Docker Swarm using DietPi

Setting up a Docker Swarm cluster doesn’t have to be complicated. Here’s how I deploy and configure Docker Swarm on DietPi—a lightweight, well-maintained OS that runs on everything from Raspberry Pi to virtual machines.

Why DietPi?

DietPi strikes the perfect balance: it’s lightweight, actively maintained, runs on tons of devices, and is incredibly easy to configure. Whether you’re using Raspberry Pi, other single-board computers, or even virtual machines on Proxmox or VMware, DietPi makes cluster deployment straightforward.

Note: This guide assumes you’re using Raspberry Pi or similar SBCs. If you’re running on Proxmox or VMware, some steps—especially editing dietpi.txt—will differ slightly.

Download, write, and edit DietPi image

First, follow the official DietPi installation instructions to download and write the image to your SD card.

Once written to the SD card, continue with the next step.
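If you prefer the command line to a GUI imager, the download-and-write step looks roughly like this. A minimal sketch: the image URL is an example (check dietpi.com for the current one) and /dev/sdX is a placeholder, so verify the device with lsblk before writing.

# Download and extract a DietPi image (example name; check dietpi.com for the current one)
curl -LO https://dietpi.com/downloads/images/DietPi_RPi-ARMv8-Bookworm.img.xz
xz -d DietPi_RPi-ARMv8-Bookworm.img.xz

# Identify your SD card with lsblk, then write the image.
# WARNING: dd overwrites the target device; replace /dev/sdX with your SD card.
sudo dd if=DietPi_RPi-ARMv8-Bookworm.img of=/dev/sdX bs=4M status=progress conv=fsync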

OS installation

Insert the SD card into your computer and use your favorite editor to modify dietpi.txt:

nano /sdcard/boot/dietpi.txt

Edit the following values to match your locale and preferences:

# Keyboard layout e.g.: "gb" / "us" / "de" / "fr"
AUTO_SETUP_KEYBOARD_LAYOUT=us
# Time zone e.g.: "Europe/London" / "America/New_York" | Full list: https://en.wikipedia.org/wiki/List_of_tz_database_time_zones
AUTO_SETUP_TIMEZONE=Europe/Amsterdam

Be sure to change the hostname to something useful like Node1:

# Hostname
AUTO_SETUP_NET_HOSTNAME=<insert-hostname-here>

I use OpenSSH for better compatibility with Ansible later on:

##### Software options #####
# SSH server choice: 0=none/custom | -1=Dropbear | -2=OpenSSH
AUTO_SETUP_SSH_SERVER_INDEX=-2

Make sure automated setup will run:

##### Non-interactive first run setup #####
# On first login, run update, initial setup and software installs without any user input
# - Setting this to "1" is required for AUTO_SETUP_GLOBAL_PASSWORD and AUTO_SETUP_INSTALL_SOFTWARE_ID.
# - Setting this to "1" indicates that you accept the DietPi GPLv2 license, available at /boot/DietPi-LICENSE.txt, superseding AUTO_SETUP_ACCEPT_LICENSE.
AUTO_SETUP_AUTOMATED=1
# Software to automatically install
# - Requires AUTO_SETUP_AUTOMATED=1
# - List of available software IDs: https://github.com/MichaIng/DietPi/wiki/DietPi-Software-list
# - Add as many entries as you wish, one each line.
# - DietPi will automatically install all dependencies, like ALSA/X11 for desktops etc.
# - E.g. the following (without the leading "#") will install the LXDE desktop automatically on first boot:
AUTO_SETUP_INSTALL_SOFTWARE_ID=17
AUTO_SETUP_INSTALL_SOFTWARE_ID=103
AUTO_SETUP_INSTALL_SOFTWARE_ID=105
AUTO_SETUP_INSTALL_SOFTWARE_ID=130
AUTO_SETUP_INSTALL_SOFTWARE_ID=134

This configuration installs OpenSSH, Git, Docker, Docker Compose, and Python3 automatically. From here, you can use Ansible for further automation.
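Since every node then has OpenSSH and Python, a minimal Ansible inventory is all you need to start automating. A sketch assuming four nodes: only cube01’s address (10.0.0.60) appears later in this guide, the other IPs are placeholders.

# inventory.ini
[managers]
cube01 ansible_host=10.0.0.60
cube02 ansible_host=10.0.0.61
cube03 ansible_host=10.0.0.62

[workers]
cube04 ansible_host=10.0.0.63

[swarm:children]
managers
workers

[swarm:vars]
ansible_user=root

A quick connectivity test: ansible swarm -i inventory.ini -m ping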

For Raspberry Pi or other SBC users: Repeat these steps for each device—write the image, edit the configuration, then insert the SD card into your device.

For VM users: After editing dietpi.txt, enable the first-boot service in your running VM:

sudo systemctl enable DietPi-firstboot.service
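You can confirm the unit is enabled before rebooting (same service name as above):

systemctl is-enabled DietPi-firstboot.service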

Reboot, and the system will configure itself automatically.

Docker installation

Docker and Docker Compose are already installed and configured by DietPi’s automated setup. Now we can move on to creating the swarm.
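A quick sanity check on each node confirms the automated install worked. Version numbers will vary, and depending on how DietPi packaged Compose, the second command may be docker-compose version instead:

docker --version
docker compose version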

Initializing Docker Swarm

Initializing the swarm is straightforward. These steps are adapted from the Turing Pi deployment documentation.

On your master node (in this example, Node1 with IP 10.0.0.60 and hostname cube01), log in as root and initialize the swarm:

docker swarm init --advertise-addr 10.0.0.60

You should see output similar to this:

Swarm initialized: current node (myjwx5z3m7kcrplih1yw0e2sy) is now a manager.
To add a worker to this swarm, run the following command:
    docker swarm join --token SWMTKN-1-2gwqttpup0hllec6p6xkun8nmht4xu18g09vsxyjhlyqc9sgjw-729yfmz5rfg02eiw0537m49c1 10.0.0.60:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

From the output, we have two important things:

  • docker swarm join --token <token> 10.0.0.60:2377
  • docker swarm join-token manager

The first command joins additional worker nodes to the swarm, while the second prints the equivalent join command for new managers. A single manager works, but three is the recommended number: if one manager fails, the remaining two still form a majority and elect a new leader, so everything keeps running. Adding a fourth manager buys you nothing, because the Raft consensus algorithm requires a majority of managers to be reachable; with either three or four managers you can still only lose one. More information can be found in the official Docker documentation on swarm administration.
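As a rule of thumb, a swarm with N managers needs a majority of them reachable, so it tolerates the loss of at most (N-1)/2 managers, rounded down:

Managers   Majority needed   Failures tolerated
1          1                 0
3          2                 1
4          3                 1
5          3                 2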

You can always reprint the worker join command with:

docker swarm join-token worker

Joining managers

It’s worth noting that node roles in Docker Swarm work differently from those in Kubernetes: any node can become a manager, and manager nodes also run workloads by default. This flexibility makes Swarm simpler to manage for smaller clusters.
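If you would rather keep managers free of regular workloads, closer to the Kubernetes control-plane model, you can optionally drain them. The node name here is just an example:

# Stop scheduling tasks on the node and move existing ones elsewhere
docker node update --availability drain cube01

# Revert if you change your mind
docker node update --availability active cube01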

Let’s make Node2 and Node3 managers as well:

# Get the join command from current manager node
docker swarm join-token manager

# On Node2 and 3 use that command to join, in our case:
docker swarm join --token SWMTKN-1-2gwqttpup0hllec6p6xkun8nmht4xu18g09vsxyjhlyqc9sgjw-eba5cbn1o4zv449w441bndfv0 10.0.0.60:2377
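Each node should answer with output similar to:

This node joined a swarm as a manager.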

Joining workers

On Node4, execute the join command using the worker token from the master node:

root@cube04:~# docker swarm join --token SWMTKN-1-2gwqttpup0hllec6p6xkun8nmht4xu18g09vsxyjhlyqc9sgjw-729yfmz5rfg02eiw0537m49c1 10.0.0.60:2377
This node joined a swarm as a worker.

Verifying the cluster

On any of the manager nodes, run docker node ls:

root@cube02:~# docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
myjwx5z3m7kcrplih1yw0e2sy     cube01     Ready     Active         Leader           23.0.0
hox556neondt5kloil0norswb *   cube02     Ready     Active         Reachable        23.0.0
m9a7tbcksnjhm01bs8hyertik     cube03     Ready     Active         Reachable        23.0.0
ylkhufgruwpq2iafjwsw01h4r     cube04     Ready     Active                          23.0.0

If you are on a worker node, this command will not work, since only managers can query swarm state. You can look up the manager addresses from any node with:

root@cube02:~# docker info | grep -A1 'Manager Addresses'
Manager Addresses:
    10.0.0.60:2377

To promote a worker node to a manager in a Docker Swarm cluster, run the following command from an existing manager:

docker node promote <node-name>

Replace <node-name> with the hostname of the node you want to promote. This changes the node’s role from worker to manager, granting it additional privileges and responsibilities within the swarm. Note that while a swarm can have several managers, only one of them acts as the Raft leader at any given time; the leader is elected automatically among the managers, and promoting a node does not demote the current leader.
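The reverse operation exists as well. A quick sketch, using the worker from this guide as a hypothetical example:

# Promote the worker cube04 to a manager (run on an existing manager)
docker node promote cube04

# Demote it back to a plain worker
docker node demote cube04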

Fixing network conflicts

During testing, I ran into an issue with port publishing. The automatically created overlay ingress network was in the same IP range as my nodes—which causes conflicts.

My nodes use IPs in the 10.0.0.x range, and the ingress network was configured with:

{
    "Subnet": "10.0.0.0/8",
    "Gateway": "10.0.0.1"
}

You can check your ingress network configuration with these commands:

root@cube01:~# docker network ls
NETWORK ID     NAME                      DRIVER    SCOPE
c7ba0aae930a   bridge                    bridge    local
48ce906c3544   docker_gwbridge           bridge    local
5c6001c2110e   host                      host      local
kpiocqticjlx   ingress                   overlay   swarm
fb28177a7b9a   none                      null      local
k55h53e1e97d   portainer_agent_network   overlay   swarm
# Using the ID: kpiocqticjlx of ingress network
docker network inspect --format='{{json .IPAM.Config}}' kpiocqticjlx
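On my cluster, the inspect command printed the conflicting range shown above:

[{"Subnet":"10.0.0.0/8","Gateway":"10.0.0.1"}]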

If the subnet range conflicts with your node IPs, you’ll need to recreate the ingress network in a different range. Docker will ask you to confirm the removal, and any services that publish ports must be removed first:

docker network rm ingress

# Create in different range
docker network create --driver overlay --ingress --subnet 172.16.0.0/16 --gateway 172.16.0.1 ingress
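Re-run the inspect command, this time by name, to confirm the new range took effect:

docker network inspect --format='{{json .IPAM.Config}}' ingress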

After recreating the ingress network, restart all your nodes for the changes to take effect.

Shared storage

Shared storage across the swarm is a topic I’ll return to in a future post about distributed storage. For now, I recommend setting up a simple NFS or SMB share to provide persistent storage for your containers.

A basic NFS setup works well for most home lab scenarios and is much simpler to configure than distributed storage solutions. You can mount the share on all nodes and use it for Docker volumes that need to be accessible across the cluster.
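As a concrete sketch, Docker’s built-in local volume driver can mount an NFS export directly when a volume is created. The server address 10.0.0.50 and export path /export/swarm below are assumptions for illustration:

# Create an NFS-backed volume (repeat on each node that will run the service)
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=10.0.0.50,rw \
  --opt device=:/export/swarm \
  swarm-data

Any container that mounts swarm-data then reads and writes the same NFS export, regardless of which node it lands on.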
