Docker swarm – Load balancing ASP.NET Core on Ubuntu 20.04 – Part 1


Today, we’ll learn how to host your ASP.NET Core site for load balancing inside a Docker swarm on Ubuntu 20.04. A Docker swarm, at its simplest, is a group (cluster) of machines. The machines can be physical or virtual. Each machine is called a node, and each node has the role of either a manager (leader) or a worker. Docker swarm is a container orchestration tool, just like Kubernetes: it helps you manage your containers hosted across machines.

This is Part 1 of the article. I wanted to focus on native Docker capabilities to orchestrate and manage containers, so I picked Docker swarm. I’ll cover Kubernetes in upcoming blogs.

Note: You can promote any node to a manager role. But for the sake of simplicity, we’ll keep one leader and the others as worker nodes. Here, I’ve created three virtual machines: the first one as leader and the other two as worker nodes. I use the Bitvise SSH client to log in to my Ubuntu servers. You can use PuTTY if you like; both support easy copy-pasting of commands. Since I use virtual machines, Bitvise also lets me transfer files from host to guest machines via a GUI.

The IP addresses of my machines are:

Node   Node details     IP Address       Machine Name
1      Leader node      192.168.26.128   jupiter
2      Worker node 1    192.168.26.129   saturn
3      Worker node 2    192.168.26.130   neptune

This is the first step towards load balancing your site. I’ll be using the same ASP.NET Core hello world Docker container which I created in my previous article.


Prerequisites

  1. Run sudo apt-get update on both the worker nodes.
  2. Ensure the OpenSSH server is installed on all machines during OS installation.
  3. Ensure that the docker service is installed on both Worker node 1 and 2 machines. You can refer to this for installation.
  4. You don’t need to install any other software tools or packages on Worker node 1 and 2; just docker is enough. Don’t install the dotnet core sdk or anything else for this article. We’ll keep Worker node 1 and 2 as clean as possible.
  5. Remember to prepend sudo to all your commands, or just type sudo -s in the beginning to skip typing sudo every time.
  6. The Leader node has the ASP.NET Core hello world web application installed. The container image originally created also resides here.
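
Before joining nodes to the swarm, it can help to script a quick preflight check on each machine. Below is a minimal sketch with a hypothetical helper (`require_cmd` is not from this article) that verifies a command exists; on the workers you would call it with `docker`:

```shell
#!/bin/sh
# Hypothetical preflight helper: verify a command exists on this node
# before letting it join the swarm.
require_cmd() {
    if command -v "$1" >/dev/null 2>&1; then
        echo "$1: ok"
    else
        echo "$1: missing" >&2
        return 1
    fi
}

require_cmd sh   # always present; on the workers run: require_cmd docker
```

Running `require_cmd docker` on a freshly installed worker tells you immediately whether step 3 above was completed.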


What we’ll do

  1. Host an ASP.NET Core site inside a Docker container on the leader node.
  2. The leader node will replicate your container to all worker nodes automatically.

Prepare the leader node

We’ll quickly set up Docker swarm. It boils down to a single command to initialize the swarm. When you run docker system info, it shows that the swarm is inactive.

Assuming this is the first time you are creating the swarm, run: docker swarm init

Swarm initialized: current node (hmvgx5st2rgqw0ybz6frw7bew) is now a manager.

To add a worker to this swarm, run the following command:

docker swarm join --token SWMTKN-1-289ioi145whp5tg80zpb4sd9ilc57axa3a56epra3li321a3xy-e6sj9l1zlgjdlhht8qex1hh52 192.168.26.128:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

Your output will be similar, but the token and node ID will differ. You have now promoted your first machine to a manager. Importantly, the output itself shows the command (docker swarm join) you need to run on Worker node 1 and 2. This will make the worker nodes join the swarm.
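
If you lose the init output, the leader can re-print the worker join command at any time with `docker swarm join-token worker`. The sketch below extracts just the token from that output; the sample text is hardcoded from the article so the snippet runs anywhere:

```shell
#!/bin/sh
# On a real leader you would capture live output instead:
#   out=$(docker swarm join-token worker)
# Sample output hardcoded here (token from the article):
out='To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-289ioi145whp5tg80zpb4sd9ilc57axa3a56epra3li321a3xy-e6sj9l1zlgjdlhht8qex1hh52 192.168.26.128:2377'

# Pull out the SWMTKN-... token from the join command line.
token=$(printf '%s\n' "$out" | grep -o 'SWMTKN-[^ ]*')
echo "$token"
```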

On leader node, run: docker node ls. You will see:

ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
ofbzz2nk2o29y0epytdvaky6m *   jupiter             Ready               Active              Leader              19.03.8

As a next step, SSH into your Worker node 1 and 2 machines. Run individually:

sudo docker swarm join --token SWMTKN-1-289ioi145whp5tg80zpb4sd9ilc57axa3a56epra3li321a3xy-e6sj9l1zlgjdlhht8qex1hh52 192.168.26.128:2377

Important note: All subsequent commands should be run on the Leader node; swarm management commands will fail on the Worker nodes.

It will give you a message that the node has joined the swarm. Once both machines have joined the swarm, you can confirm by running:

docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
ofbzz2nk2o29y0epytdvaky6m *   jupiter             Ready               Active              Leader              19.03.8
29gi2dej0bk66yehy9jqm1if3     neptune             Ready               Active                                  19.03.8
gs2piolqn814eurpcj5f6qq3r     saturn              Ready               Active                                  19.03.8
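
If you want to script this readiness check instead of eyeballing the table, `docker node ls` accepts a `--format` template. A sketch, using a captured sample of the three hostnames from this article so it runs anywhere (on a real leader, pipe in `docker node ls --format '{{.Hostname}} {{.Status}}'` instead):

```shell
#!/bin/sh
# Sample of `docker node ls --format '{{.Hostname}} {{.Status}}'`
# output, hardcoded from the article's swarm:
sample='jupiter Ready
saturn Ready
neptune Ready'

# Count the nodes whose status column reads "Ready".
ready=$(printf '%s\n' "$sample" | awk '$2 == "Ready" {n++} END {print n+0}')
echo "ready nodes: $ready"
```

On this three-node swarm you would expect the count to be 3 once both workers have joined.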

Now when you run docker system info, it shows that the swarm is active. We’ll now prepare to host the container in the swarm.

Install Docker compose

First run,
sudo curl -L "https://github.com/docker/compose/releases/download/1.25.5/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

Then give execute permissions to Docker compose,
sudo chmod +x /usr/local/bin/docker-compose

Check if docker compose is correctly installed:
docker-compose --version
Output will be something like: docker-compose version 1.25.5, build 8a1c60f6

Create registry to replicate containers

You need to create a registry to be able to replicate containers on all worker nodes.

So run: docker service create --name registry --publish published=5000,target=5000 registry:2

overall progress: 1 out of 1 tasks
1/1: running   [==================================================>]
verify: Service converged

Run docker service ls and check the output:

ID                  NAME                MODE                REPLICAS            IMAGE                  PORTS
wzx4cntqs5ca        registry            replicated          1/1                 registry:2             *:5000->5000/tcp

Now switch to the ./apps/aspnet-hw/aspnetcoreapp/ folder.

We need to create a docker-compose.yml file. Type nano docker-compose.yml and paste the following contents inside it.

version: '3'

services:
  web:
    image: 127.0.0.1:5000/counter-image
    build: .
    ports:
      - "8000:5000"

Note: The image for the web app is built using the Dockerfile from the previous article. It’s also tagged with 127.0.0.1:5000 – the address of the registry created earlier. This is important when distributing the app to the swarm.

Push docker image to swarm

To distribute the web app’s image across the swarm, it needs to be pushed to the registry you set up earlier. With the help of Compose, this is very simple, run:

docker-compose push
Pushing web (127.0.0.1:5000/counter-image:latest)...
The push refers to repository [127.0.0.1:5000/counter-image]
2089e92f307d: Pushed
d28d93422a14: Pushed
98b87267e8cd: Pushed
731d9ef100ab: Pushed
097cf5cb984a: Pushed
4c1434a6c15b: Pushed
c2adabaecedb: Pushed
latest: digest: sha256:cdb85f4a62aaaddbb6f32ce900fd03e8c5d10997715f0c70fcc162c3ae5b8051 size: 1792
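
You can confirm the push landed by querying the registry’s HTTP API: a GET to /v2/_catalog lists the repositories it holds. A sketch with the expected response hardcoded so it runs anywhere (on the leader, fetch the live response with `curl -s http://localhost:5000/v2/_catalog` instead):

```shell
#!/bin/sh
# On a real leader:
#   resp=$(curl -s http://localhost:5000/v2/_catalog)
# Sample response hardcoded here; the repository name matches the
# image tag in the docker-compose.yml above.
resp='{"repositories":["counter-image"]}'

# Crude check that the repository appears in the catalog JSON.
case "$resp" in
    *'"counter-image"'*) msg="image is in the registry" ;;
    *)                   msg="image not found" ;;
esac
echo "$msg"
```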

The stack is now ready to be deployed:

docker stack deploy --compose-file docker-compose.yml counter-image
Ignoring unsupported options: build

Creating network counter-image_default
Creating service counter-image_web

Run docker service ls

ID            NAME                MODE                REPLICAS            IMAGE                                 PORTS
kngu8zmvnxbt  counter-image_web   replicated          1/1                 127.0.0.1:5000/counter-image:latest   *:8000->5000/tcp
kug3cnpctqad  registry            replicated          1/1                 registry:2                            *:5000->5000/tcp

Replication complete

Done! Now your container will be replicated to Worker nodes 1 and 2 seamlessly. Since our container image is small, replication should finish in under a minute.
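
To see which node the swarm actually placed the running task on, `docker service ps` also takes a `--format` template. A sketch with sample output hardcoded (one replica, shown here as placed on the leader; run the real command on your own swarm):

```shell
#!/bin/sh
# On a real leader:
#   docker service ps counter-image_web --format '{{.Node}} {{.CurrentState}}'
# Sample output hardcoded from a hypothetical placement:
tasks='jupiter Running 2 minutes ago'

# First column is the node the task was scheduled on.
node=$(printf '%s\n' "$tasks" | awk '{print $1}')
echo "task runs on: $node"
```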

Now you need to test your replication:

On the Leader machine type: curl http://localhost:8000. It will show HTML output.
On the Leader machine type: curl http://192.168.26.129:8000. It will show HTML output.
On the Leader machine type: curl http://192.168.26.130:8000. It will show HTML output.
As an alternative, on the Worker node 1 and 2 machines, you can also type curl http://localhost:8000 and it will still show HTML output.
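
The three checks above can be rolled into one loop over the IPs from the table earlier in the article. The curl line is commented out so the sketch runs even without the swarm up; uncomment it on a real machine:

```shell
#!/bin/sh
# Loop over every node's published port; the swarm routing mesh
# should answer on all of them.
checked=""
for ip in 192.168.26.128 192.168.26.129 192.168.26.130; do
    echo "checking http://$ip:8000"
    # curl -fs "http://$ip:8000" >/dev/null && echo "$ip: ok"
    checked="$checked $ip"
done
```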

To check the status of your container, run: docker stack services counter-image

ID                  NAME                MODE                REPLICAS            IMAGE                                 PORTS
kngu8zmvnxbt        counter-image_web   replicated          1/1                 127.0.0.1:5000/counter-image:latest   *:8000->5000/tcp


In this first part, we covered how to host your ASP.NET Core site for load balancing inside a Docker swarm on Ubuntu 20.04. In the second part of this article, we will implement and test load balancing.

Clean up

Remove a node from a swarm: docker node rm <machinename> --force

Leave a swarm, run this from the Worker node: sudo docker swarm leave --force

Bring down the stack, run: docker stack rm counter-image

Tear down registry, run: docker service rm registry

Destroy the swarm, run:
First, on Worker nodes: docker swarm leave --force
Last, on Leader node: docker swarm leave --force


Sometimes, when you reboot Worker nodes, they remain in “Down” status and are not reachable by the swarm. In such cases, after rebooting the Worker node, just restart the docker service on it and things will be fine.

systemctl restart docker
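
If this happens often, the restart can be made conditional so you only bounce docker when it is actually down. A sketch; on a real worker you would query systemd with `systemctl is-active docker` instead of the hardcoded sample value:

```shell
#!/bin/sh
# On a real worker:
#   state=$(systemctl is-active docker)
state=inactive   # sample value so the sketch runs anywhere

# Only restart when docker is not already active.
if [ "$state" != "active" ]; then
    action="restart"
    # sudo systemctl restart docker
else
    action="skip"
fi
echo "action: $action"
```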

If you forget the swarm join token, just run this on the Leader node (use worker or manager depending on the role you want to add):

docker swarm join-token worker
