Docker Swarm Mode – Features, Nodes and Filters


Today, we will cover the complete concept of Docker Swarm mode. To understand Swarm mode well, it is important to first know what a swarm is, so we will start with its meaning.

We will then look at the features and nodes of Docker Swarm, discuss Docker Swarm filters and load balancing, and finally cover tasks and services in Docker Swarm.

The cluster management and orchestration features embedded in the Docker Engine are what we call swarm mode. The Docker Engine runs in swarm mode when we initialize a new swarm (cluster) or join nodes to an existing swarm.

So, let’s begin Docker Swarm tutorial.
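Before diving in, here is a minimal sketch of how swarm mode is switched on. The IP address and token below are placeholders; the commands assume a working Docker Engine on each machine.

```shell
# On the machine that will become the first manager
# (--advertise-addr is the address other nodes use to reach it)
docker swarm init --advertise-addr 192.168.99.100

# The init output prints a ready-made join command with a worker token,
# which we run on each machine that should join as a worker:
docker swarm join --token SWMTKN-1-<token> 192.168.99.100:2377

# To retrieve the worker join token again later, on a manager:
docker swarm join-token worker
```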

What is Swarm in Docker?

As mentioned above, the cluster management and orchestration features embedded in the Docker Engine are built using SwarmKit. SwarmKit is a separate project that implements Docker’s orchestration layer and is used directly within Docker.

A swarm consists of multiple Docker hosts that run in swarm mode and act as managers and workers. Managers handle membership and delegation, while workers run swarm services. A given Docker host can take the role of a manager, a worker, or both.

When we create a service, we define its desired state: the network and storage resources available to it, the number of replicas, the ports the service exposes to the outside world, and so on. Docker then works to maintain that desired state.
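As a concrete sketch, the following creates a hypothetical service named `web` with a desired state of three replicas and one published port (the name and image are illustrative):

```shell
# Desired state: 3 replicas of nginx, container port 80 published as 8080
docker service create \
  --name web \
  --replicas 3 \
  --publish published=8080,target=80 \
  nginx:alpine

# Inspect the desired state that Docker will work to maintain
docker service inspect --pretty web
```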

For example, if a worker node becomes unavailable, Docker schedules that node’s tasks on other nodes. Here, a task is a running container that is part of a swarm service and managed by a swarm manager, as opposed to a standalone container.

Features of Docker Swarm

Cluster management integrated with Docker Engine:

We use the Docker Engine CLI to create a swarm of Docker Engines where we can deploy application services. This means we do not need additional orchestration software to create or manage a swarm.

i. Scaling

For each service, we can declare the number of tasks we want to run. Whenever we scale up or down, the swarm manager automatically adapts by adding or removing tasks to maintain the desired state.
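A quick sketch of scaling, assuming a hypothetical existing service named `web`:

```shell
# Scale "web" to 5 replicas; the manager adds or removes tasks
# until the running state matches the desired state
docker service scale web=5

# Equivalent declarative form:
docker service update --replicas 5 web
```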

ii. Multi-host networking

Also, we can specify an overlay network for our services. When the swarm manager initializes or updates the application, it automatically assigns addresses to the containers on the overlay network.
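For example, an overlay network can be created and attached to a service like this (the network and service names are hypothetical; `--attachable` additionally allows standalone containers to join):

```shell
# Create an attachable overlay network spanning the swarm nodes
docker network create --driver overlay --attachable my-overlay

# Attach a service to it; each of its containers gets an address
# on the overlay network
docker service create --name api --network my-overlay nginx:alpine
```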

iii. Service discovery

Swarm manager nodes assign each service in the swarm a unique DNS name and load-balance its running containers. Through a DNS server embedded in the swarm, we can query every container running in the swarm.
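As a sketch of service discovery in action: assuming a hypothetical attachable overlay network `my-overlay` carrying a service named `api`, the service name resolves via the swarm’s embedded DNS server from any container on that network:

```shell
# The name "api" resolves to the service's virtual IP via embedded DNS
docker run --rm --network my-overlay alpine nslookup api
```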

iv. Load balancing

We can expose the ports for services to an external load balancer. Internally, the swarm lets us specify how to distribute service containers between nodes.

Docker Swarm – Nodes

In simple words, an instance of the Docker Engine participating in the swarm is what we call a node (or a Docker node). In fact, we can run one or more nodes on a single physical computer or cloud server.

However, production swarm deployments typically involve Docker nodes distributed across multiple physical and cloud machines.


To deploy our application to a swarm, we submit a service definition to a manager node. The manager node then dispatches units of work, known as tasks, to worker nodes.
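Node management happens on a manager; a brief sketch (the node name is a placeholder):

```shell
# On a manager node: list all nodes with their role and availability
docker node ls

# Change a node's role in the swarm
docker node promote <node-name>
docker node demote <node-name>
```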

Services and Tasks in Docker Swarm

A service is the definition of the tasks to execute on the manager or worker nodes. It is the primary root of user interaction with the swarm and the central structure of the swarm system.

When we create a service, we specify which container image to use and which commands to execute inside the running containers.

In the replicated services model, the swarm manager distributes a specific number of replica tasks among the nodes, based on the scale we set in the desired state. For global services, the swarm runs one task for the service on every available node in the cluster.
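The two models map to the `--mode` flag of `docker service create`; service names and images below are illustrative:

```shell
# Replicated service: the manager spreads 3 tasks across the nodes
docker service create --name web --mode replicated --replicas 3 nginx:alpine

# Global service: exactly one task on every available node
# (typical for node-level agents such as log collectors)
docker service create --name agent --mode global nginx:alpine
```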

A task carries a Docker container and the commands to run inside the container. It is the atomic scheduling unit of the swarm.

Manager nodes assign tasks to worker nodes according to the number of replicas set in the service scale. Once a task is assigned to a node, it cannot move to another node: it either runs on the assigned node or fails.
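We can observe tasks and their placement with `docker service ps` (the service name `web` is hypothetical):

```shell
# List the tasks of a service: which node each task runs on,
# its desired state and its current state
docker service ps web

# A failed task is never moved; the orchestrator instead starts a
# replacement task (with a new task ID) to restore the replica count
```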

Load Balancing in Docker Swarm

To expose the services we want to make available outside the swarm, the swarm manager uses ingress load balancing. We can either configure a PublishedPort for the service ourselves, or let the swarm manager assign the service a PublishedPort automatically.

We can specify any unused port. If we do not specify a port, the swarm manager assigns the service a port in the 30000-32767 range.
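Both cases can be sketched as follows (service names and image are illustrative):

```shell
# Explicit PublishedPort: reachable on port 8080 of every swarm node
docker service create --name web --publish published=8080,target=80 nginx:alpine

# Target port only: the swarm auto-assigns a PublishedPort
# from the 30000-32767 range
docker service create --name web2 --publish target=80 nginx:alpine

# Check which port was chosen
docker service inspect --format '{{.Endpoint.Ports}}' web2
```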

In addition, external components such as cloud load balancers can access the service on the PublishedPort of any node in the cluster, whether or not that node is currently running a task for the service.

Swarm mode has an internal DNS component which automatically assigns each service in the swarm a DNS entry. The swarm manager uses internal load balancing to distribute requests among services within the cluster, based on the DNS name of the service.

Docker Swarm Filters

There are five filters for scheduling containers in Swarm:

i. Constraint – Key/value pairs associated with particular nodes are what we call constraints. We also call them node tags.

ii. Affinity – The Affinity filter tells one container to run next to another, based on an identifier, image, or label, in order to ensure that containers run on the same network node.

iii. Port – With this filter, ports represent a unique resource. If a container tries to run on a port which is already occupied, it moves to the next node in the cluster.

iv. Dependency – When containers depend on each other, this filter schedules them on the same node.

v. Health – When a node is not functioning properly, this filter prevents scheduling containers on it.
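Note that these filters come from the classic standalone Docker Swarm scheduler; in swarm mode, the closest equivalent is placement constraints based on node attributes and labels. A sketch with a hypothetical custom label:

```shell
# Constrain a service's tasks to manager nodes
docker service create --name dash --constraint node.role==manager nginx:alpine

# Constrain by a custom node label (the label must be added first)
docker node update --label-add disk=ssd <node-name>
docker service create --name db --constraint node.labels.disk==ssd nginx:alpine
```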

So, this was all about Docker Swarm Mode. Hope you like our explanation.

Conclusion

Hence, we have covered Docker Swarm mode in full in this article. If you have any doubts regarding Docker Swarm mode, feel free to ask through the comment section. Hope it helps!


