Set up a Swarm cluster with persistent storage in 10 minutes
Docker Swarm is a popular orchestration tool for managing software containers, which let applications be built once and run on any platform. A Swarm cluster turns a group of Docker hosts into a single large virtual server for your containerized applications.
While containers provide a great way to create, manage, and run applications efficiently, their transient nature means they lose all data when deleted. This is a problem for applications such as databases, where persisting data is essential.
Why Persistent Storage?
Persistent storage is a key requirement for virtually all enterprises, because most of the systems they run depend on data that can be saved and tapped into later. These include systems that provide insights into consumer behavior and deliver actionable leads on what customers are looking for.
Persistent storage is desirable for container-based applications and Swarm clusters because data can be retained after applications running inside those containers are shut down. However, most deployments today rely on external storage systems for data persistence.
In public cloud deployments, this means using managed services such as EBS, S3 and EFS. On-premises deployments typically use traditional NAS and SAN storage solutions, which are cumbersome and expensive to operate.
Setup Swarm Cluster with Cloud Native Storage
CIO (container I/O) is software-defined storage that integrates directly with Docker Swarm. It is designed to make cloud-native clusters and the applications and services running inside them more self-sufficient and portable, by providing highly available storage as a service.
This how-to guide installs Portainer for container management using a graphical user interface and Storidge’s CIO software for storage management.
After completing the steps below you will have a four-node Docker Swarm cluster, with persistent volumes for your stateful applications, and a simple-to-use container management web UI.
Let’s get started!
Set up the cluster resources to orchestrate. You’ll need:
- Physical or virtual hosts. You can use docker-machine create to provision virtual hosts. Here are examples using VirtualBox and DigitalOcean.
- Since we want data protected and available, each host needs a minimum of four drives: one for the boot volume and three data drives attached for CIO use.
- Networking configured to allow SSH connections across all nodes.
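For the virtual-host option, provisioning with docker-machine might look like the sketch below. The driver flags are standard docker-machine options, but the node name c1 and the DO_TOKEN variable are assumptions for illustration:

```shell
# Provision a VirtualBox VM as a Docker host (node name "c1" is an example)
docker-machine create --driver virtualbox c1

# Or provision a DigitalOcean droplet; DO_TOKEN is assumed to hold your API token
docker-machine create --driver digitalocean \
    --digitalocean-access-token "$DO_TOKEN" c1
```

Repeat for the remaining nodes (c2, c3, c4), then attach the three data drives to each host before installing cio.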
Step 1. Install cio on each node
CIO currently supports CentOS 7.4 (3.10 kernel), RHEL 7 (3.10 kernel) and Ubuntu 14.04 LTS (3.13 kernel). After verifying you have a supported distribution, run the demo script below to begin installation:
root@c1:~# curl -fsSL ftp://download.storidge.com/pub/free/demo | sudo bash
Started installing release 2148 at Tue Jan 30 12:47:26 PST 2018
Loading cio software for: u16 (4.4.0-104-generic)
...
...
...
latest: Pulling from portainer/portainer
Digest: sha256:232742dcb04faeb109f1086241f290cb89ad4c0576e75197e902ca6e3bf3a9fc
Status: Image is up to date for portainer/portainer:latest
Finished at Tue Jan 30 12:48:04 PST 2018
cio software installation complete.

cio requires a minimum of 3 local drives per node for data redundancy

Please verify local drives are available, then run 'cioctl create' command on primary node to start a new cluster
Repeat the cio installation on all nodes that will be members of the cluster. Single node installs are supported for simple testing.
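Before moving on, it is worth confirming that each node really has three unused data drives in addition to the boot disk. A quick way to check, assuming standard Linux block device naming:

```shell
# List block devices on this node; drives intended for CIO
# should show no mountpoint, unlike the boot disk
lsblk -d -o NAME,SIZE,TYPE,MOUNTPOINT
```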
Note: The use of convenience scripts is recommended for dev environments only, as root permissions are required to run them.
Step 2. Initialize the cluster
Start a CIO cluster with the cioctl create command. The first node (c1) is designated the sds controller node (primary). The output includes a command to join new nodes to the cluster.
[root@c1 ~]# cioctl create
Cluster started. The current node is now the primary controller node.

To add a storage node to this cluster, run the following command:
    cioctl join 192.168.3.95 root e61ed303

After adding all storage nodes, return to this node and run following command to initialize the cluster:
    cioctl init e61ed303
Add the new nodes (c2, c3, c4) to the cluster with the cioctl join command.
[root@c2 ~]# cioctl join 192.168.3.95 root e61ed303
Adding this node to cluster as a storage node

[root@c3 ~]# cioctl join 192.168.3.95 root e61ed303
Adding this node to cluster as a storage node

[root@c4 ~]# cioctl join 192.168.3.95 root e61ed303
Adding this node to cluster as a storage node
Run the cioctl init command on the primary node to complete setup of the cluster.
[root@c1 ~]# cioctl init e61ed303
cluster: initialization started
...
...
cluster: MongoDB ready
cluster: Synchronizing VID files
cluster: Starting API
Note: For physical servers with SSDs, the initialization process will take about 30 minutes longer the first time. This extra time is used to characterize the available performance in the cluster. This performance information is used in CIO’s quality-of-service (QoS) feature to deliver guaranteed performance for individual applications.
At the end of initialization, you have a four-node Docker Swarm cluster running, with three manager nodes and one worker node. Confirm with:
[root@c1 ~]# docker node ls
ID                            HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
nwqtezilmyxckm6p2dyafpp4z *   c1         Ready    Active         Leader
y2pnmgccgtdnuj8pkf0g4prav     c2         Ready    Active         Reachable
n1xyc8idy7j4uf3mfr1nr6puy     c3         Ready    Active         Reachable
1aupjv0y5nusrm4b33bul470e     c4         Ready    Active
In addition, the Portainer UI is running in a container. Verify with:
[root@c1 ~]# docker service ps portainer
ID             NAME          IMAGE                        NODE   DESIRED STATE   CURRENT STATE            ERROR   PORTS
hv3voi6t8sdl   portainer.1   portainer/portainer:latest   c1     Running         Running 16 minutes ago
Run Services in the Swarm Cluster
With the cluster up, let’s log in to the Portainer GUI and start running applications. Portainer is running on node c1; we can check its IP (192.168.3.95) with:
[root@c1 ~]# cio nodes
NODENAME   IP              NODE_ID    ROLE       STATUS
c1         192.168.3.95    05ef170d   sds        normal
c2         192.168.3.53    cfe94ce7   backup1    normal
c3         192.168.3.29    976f5d0c   backup2    normal
c4         192.168.3.129   c62e8b5a   standard   normal
In this example, point the browser at 192.168.3.95:9000, where 9000 is the default Portainer service port number.
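If you prefer to verify from the command line first, a quick curl against the service port should return HTTP response headers from Portainer (IP and port as in this example):

```shell
# Expect an HTTP response if Portainer is listening on the default port 9000
curl -I http://192.168.3.95:9000
```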
Assign an admin password and you’ll see the dashboard.
Select Volumes in the sidebar and click the Add Volume button at the top. This opens the Create a Volume page.
Enter the volume name ‘nginx’, select the cio driver to create the persistent storage, and choose the pre-defined NGINX profile from the Profile drop-down menu.
Click the Create a Volume button and you now have persistent storage for your website!
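The same volume can also be created from the CLI through Docker’s volume plugin interface. This is a sketch: the profile option name below is an assumption mirroring the profile selected in the Portainer UI, so check the cio driver documentation for the exact option names:

```shell
# Create a persistent volume named "nginx" via the cio driver
# (--opt profile=NGINX mirrors the UI profile selection; option name assumed)
docker volume create --driver cio --opt profile=NGINX nginx
```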
Next, select App Templates from the sidebar and click Container to bring up a list of application templates. Select the Nginx template to bring up the configuration page.
Enter ‘nginx’ for the container name and click Show Advanced Options to enter settings for port and volume mapping. Enter ‘8000’ for the host port and select the ‘nginx’ volume created earlier for the /usr/share/nginx/html path in the container.
Hit Deploy the Container button and the Swarm scheduler will launch the Nginx container with a persistent volume.
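The same deployment can be sketched as a Swarm service from the CLI. The --publish and --mount flags are standard docker service create options; the image and paths below match the template settings above, but treat this as an illustrative sketch rather than the exact command Portainer issues:

```shell
# Launch Nginx as a Swarm service with the persistent "nginx" volume
docker service create --name nginx \
    --publish 8000:80 \
    --mount type=volume,source=nginx,destination=/usr/share/nginx/html,volume-driver=cio \
    nginx:latest
```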
It’s easy to deploy a Docker Swarm cluster that supports both stateless and stateful applications. What has been missing is out-of-the-box cloud-native storage that anyone can deploy without first going through a long integration project. This is key to developing and deploying next-gen applications fast.