Attachment to servers originates from manual setups: each server is configured by hand, and when problems arise it is nursed back to health. This is the basis of the pets vs. cattle argument in DevOps. When servers are treated as cattle, a broken one is simply removed from the running pool and a functional replacement is inserted in its place.
The context here is veleda.io. The stack consists of three database services, Nginx as the reverse proxy, Jekyll as the static site generator, Grafana for data visualization, and a Flask app for user management. Combined, these services form Veleda, which appears to the client as a single application.
Since the project was designed to be open-sourced, a simple setup and reusability were important criteria. There were several possible approaches:
- Manually configure the services on a VPS
- Create Docker containers and mount local volumes
- Use Docker Compose for stack setup automation
- Use multi-server orchestration to enable scaling
The first approach would have been the fastest route to a running version of the website. Git would have provided adequate versioning for the user management service, and Linux init scripts would have launched the services automatically on system boot. However, this would have made it far less convenient for others to set up their own instances of Veleda. Open sourcing itself would not have been hindered, but the ability of others to contribute back would have been. This created an incentive to use Docker.
The easiest way of using Docker is directly from the command line with the `docker run` command. This makes the huge range of images on Docker Hub available for deployment. Combined with mounting local directories into the containers, it is trivial to launch the required services and connect them. The main problem is that bind mounts (mounted host directories) are host-dependent. For the needs of the Veleda project this means they are not portable, storage has to be managed outside of Docker, and data cannot be shared across servers.
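A minimal sketch of the difference, using Grafana as the example; the container name and host path are illustrative assumptions, not Veleda's actual layout:

```sh
# Bind mount: Grafana's data lives in a host directory, so the setup only
# works on machines that have this exact path and its contents.
docker run -d --name veleda-grafana \
  -v /srv/veleda/grafana:/var/lib/grafana \
  -p 3000:3000 grafana/grafana

# Named volume: Docker manages the storage itself, so the same command
# works unchanged on any host.
docker run -d --name veleda-grafana \
  -v grafana-data:/var/lib/grafana \
  -p 3000:3000 grafana/grafana
```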
This is where Docker Compose comes into play. It uses a YAML configuration to define the services, their settings, and their links to one another. This allows the whole stack to be instantiated with a single command, `docker-compose up`. As long as a server has Docker and Docker Compose installed, starting the whole application is a cinch. Additionally, Docker's named volumes allow data management to be brought under Docker's control. The advantages include a better overview, easier sharing between containers, and isolation from the host machine.
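A minimal sketch of what such a configuration could look like for the stack described above. The service names, environment variables, and image choices are assumptions for illustration (the actual database engines are not specified here, so PostgreSQL stands in as a placeholder), and Jekyll's generated site plus the remaining databases are omitted for brevity:

```sh
# Write an illustrative docker-compose.yml and bring the stack up.
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  proxy:
    image: nginx:alpine          # reverse proxy in front of everything
    ports:
      - "80:80"
    depends_on:
      - app
      - grafana
  app:
    build: ./app                 # Flask user-management service
    environment:
      - DATABASE_URL=postgresql://veleda:secret@db/veleda
    depends_on:
      - db
  grafana:
    image: grafana/grafana       # data visualization
    volumes:
      - grafana-data:/var/lib/grafana
  db:
    image: postgres:alpine       # stand-in for one of the database services
    environment:
      - POSTGRES_USER=veleda
      - POSTGRES_PASSWORD=secret
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:                         # named volumes managed by Docker
  grafana-data:
  db-data:
EOF

docker-compose up -d             # start the whole stack in the background
```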
The fourth option is to use a server orchestration tool such as Ansible, Chef, or Docker Swarm directly. These codify the individual tasks that together make up the processes of deploying and upgrading the application and its servers (devops.com). At this level it becomes possible to automate the replacement of servers and to scale the application without downtime (ansible.com). This, however, overshoots the current requirements of Veleda. Nonetheless, with growth in mind, the transition to this state should be straightforward.
This is how the decision to base the application on Docker Compose came about. Docker Compose allows all of the application's services to be codified and started together. This enables versioning of the setup and replication on others' servers. It provides an overview of the connections between services and allows the stack to be brought up and torn down from a single configuration. Looking forward, enabling Docker Swarm is greatly simplified when the application only has to be transitioned from Docker Compose. Since they share a configuration format, the main work consists of switching from the `docker-compose` command to the corresponding Docker Swarm commands.
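A rough sketch of that transition, where "veleda" is an arbitrary placeholder stack name:

```sh
# Today: run the stack on a single host with Compose.
docker-compose up -d

# Later: the same configuration file can be deployed to a swarm.
docker swarm init                                  # turn this host into a swarm manager
docker stack deploy -c docker-compose.yml veleda   # schedule the services across the swarm
# Note: stack deploy ignores build:, so locally built images would first
# have to be pushed to a registry.
```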
For the remaining task of server provisioning, Terraform has been shown to work well. It complements Docker Swarm in that one handles the provisioning of servers and the other the orchestration of services. Once user demand outgrows what a single server can handle, this is critical for moving away from treating servers as pets. Additionally, load balancing and automated data replication have to be implemented for the application to scale. Taking all of this into account, choosing Docker Compose promises the least headaches when it comes to application growth.
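A hedged sketch of that division of labor, assuming the servers are described in a hypothetical Terraform configuration in the current directory; the worker token and manager address are placeholders:

```sh
# Provisioning: Terraform creates (and later replaces) the servers
# from a declarative configuration.
terraform init
terraform apply

# Orchestration: a newly provisioned server joins the existing swarm,
# after which services can be scheduled onto it.
docker swarm join --token <worker-token> <manager-address>:2377
```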