Introduction

Docker configs are a feature of Docker Swarm that allows non-sensitive information to be stored in a Swarm cluster. They're an alternative to bind mounts and environment variables.

One of their main characteristics is that they're immutable: once a config is created, it can't be updated.

$ echo "This is my first config" | docker config create my-config -
3ysbxl9oq39qteo1wd57o3ttx
$ echo "This is my second config" | docker config create my-config -
Error response from daemon: rpc error: code = AlreadyExists desc = config my-config already exists

While this is desirable for ensuring that all replicas run with the same configuration, it makes rotating the configuration of a running service more difficult. The procedure is documented in the Docker documentation. In essence, you have to do the following:

# Create the first version of the config
$ echo "Config v1" | docker config create my-config -
# Create a service that uses that config
$ docker service create                                                        \
    --name my-service                                                          \
    --config source=my-config,target=/etc/my-service/my-service.conf,mode=0440 \
    my-service:latest
# Create a new, updated config
$ echo "Config v2" | docker config create my-config-v2 -
# Update the service so that it uses the new config
$ docker service update                                                               \
    --config-rm my-config                                                             \
    --config-add source=my-config-v2,target=/etc/my-service/my-service.conf,mode=0440 \
    my-service
# Delete the old config
$ docker config rm my-config

In my opinion, this solution leaves a lot to be desired, as it requires too much manual input. How can we improve it?

Ansible to the rescue

Ansible is an IT automation tool. It can be used for a variety of tasks, from automating the creation of cloud infrastructure to building HTML templates. In this article, I’ll demonstrate how I use Ansible to automate the management of Docker Swarm services, and more specifically configuration rotation.

All the code for this demo is open source and available here, on my GitHub page.

Setup

This demonstration makes use of Vagrant to build a reproducible VM.

Clone the repository to your computer:

$ git clone https://github.com/CrispyBaguette/docker-swarm-config-ansible

Start the VM:

$ cd docker-swarm-config-ansible
$ vagrant up

This command will trigger the creation of a Debian 11 VM, as well as run the init.yaml playbook. This playbook will set up a Docker Swarm on the VM and might take a few minutes to complete.

At this point, no service has been defined yet; that's what the next section covers.

Adding a config

Another playbook is present in the repository, but it hasn't been run yet: rotate-config.yaml. Each run of this playbook generates a new config and automatically updates a service that uses it, without any of the manual steps described above. Let's see how it works.

First of all, we make use of the templating capabilities of Ansible to generate a new config at each execution. The following template will result in an HTML page that displays the date at which it was rendered. It will serve as a useful stand-in for a real, manually updated config.

<!DOCTYPE html>
<html>
  <head>
    <title>Demo: Rotating Docker Swarm Configs with Ansible</title>
  </head>
  <body>
    <p>
      This file is stored as a Docker Swarm config. Since the file contents
      depend on the date and time at which the file is generated, each playbook
      run will result in a different config.
    </p>
    <p>This page was templated at : {{ ansible_date_time.iso8601 }}</p>
  </body>
</html>

The following snippets are taken from roles/config/tasks/main.yaml. To begin with, we build the above template and store it as base64 in a dictionary. We use a loop, as it makes it easier to add other configs in the future.

- name: Build the config template
  set_fact:
    conf_templates: "{{ conf_templates | combine({item: lookup('template', item) | b64encode }) }}"
  loop:
    - roles/config/templates/index.html.j2

Then comes the interesting part. We name the Docker Configs after their contents using a hash function before deploying them. The hashes are truncated to avoid hitting the maximum length of a Docker Config name.

- name: Deploy the configurations templates
  docker_config:
    name: "{{ (item | basename).split('.') | first }}_conf_{{ conf_templates[item] | hash('sha1') | truncate(10, end='') }}"
    data: "{{ conf_templates[item] }}"
    data_is_b64: true
    state: present
  loop:
    - roles/config/templates/index.html.j2
  register: config_templates_ids

Since the config names are computed at runtime, we need to map them to static, human-friendly names to use in the rest of the playbook. We derive them from the template file names. Here, item.item refers to the loop variable from the previous task; during the first iteration its value is roles/config/templates/index.html.j2, while the value of item.invocation.module_args.name is the computed config name. We derive the static name from the name of the file on which the config is based. We end up with a dictionary that maps static config names to content-dependent names, e.g. {'index': 'index_conf_deadbeef69'}.

- name: Populate config name dictionary
  set_fact:
    config_names: "{{ config_names | combine( { (item.item | basename).split('.') | first : item.invocation.module_args.name } ) }}"
  loop: "{{ config_templates_ids.results | flatten(levels=1) }}"
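A sketch of what this mapping produces, in plain Python: the static key is everything before the first dot of the template's base name, and the value is the computed config name. The hashed name below is made up for illustration.

```python
import os

# Hypothetical (template path, computed config name) pairs, mirroring what the
# previous task registered; the hash suffix is invented for this example.
deployed = [
    ("roles/config/templates/index.html.j2", "index_conf_deadbeef69"),
]

config_names = {}
for path, computed in deployed:
    static = os.path.basename(path).split(".")[0]  # "index.html.j2" -> "index"
    config_names[static] = computed

print(config_names)
# -> {'index': 'index_conf_deadbeef69'}
```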

At this point, Ansible can:

  • Build our configuration file,
  • Deploy said configuration to the Docker Swarm,
  • Store the hash-derived names in a dictionary that we can refer to in the next tasks.

Exposing the config through a service

It’s now time to define a service that will make use of all we’ve done so far. The following bit of YAML describes a Caddy deployment that will serve the configuration.

- name: Deploy a Caddy instance
  docker_stack:
    state: present
    name: demo_stack
    compose:
      - version: "3.8"
        services:
          caddy:
            image: caddy:2
            deploy:
              replicas: 1
            ports:
              - target: 80
                published: 8888
                protocol: tcp
                mode: ingress
            configs:
              - source: index_conf
                target: /usr/share/caddy/index.html
        configs:
          index_conf:
            name: "{{ config_names['index'] }}"
            external: true

We now have all the required pieces to add the service to the Docker Swarm. Run the following command to run the rotate-config.yaml playbook. It uses some files that were generated by Vagrant.

ansible-playbook                                                       \
  --private-key=.vagrant/machines/default/virtualbox/private_key       \
  -u vagrant                                                           \
  -i .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory \
  rotate-config.yaml

After the playbook has run, you will be able to access the service at localhost:8888. Each time the playbook is run, a new config will be generated and deployed, and the service will be updated to use it.

The generated configs can be listed using the following command:

vagrant ssh -c "sudo docker config ls"

Conclusion

In this post, I demonstrated how to use Ansible to manage Docker Swarm configurations. I used Vagrant to demonstrate the principles, but this is of course applicable to any Docker Swarm cluster.

This solution is not perfect, as it does not clean up unused configs. That is not a huge problem, since configs are small (the maximum allowed size is 500 KB). Still, because Docker does not clean up after itself and manually deleting unused configs is tedious, I wrote a Python script that takes care of that task. It's available as a gist at this address. It checks the list of all configs against the list of configs that are still in use and deletes the obsolete ones. The script could also be integrated into our playbooks, either by downloading and running it or by using the modules available within Ansible.
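The gist itself is linked above; as an illustration of the idea only, a minimal sketch using the Docker SDK for Python could look like this. The set difference is the whole trick; docker.from_env() and the configs/services listings are real SDK calls, but treat the script as a starting point rather than the actual gist.

```python
def obsolete(all_names, in_use):
    """Names of configs that exist but are referenced by no service."""
    return sorted(set(all_names) - set(in_use))

def cleanup():
    # Requires the `docker` SDK for Python and a reachable Swarm manager.
    import docker
    client = docker.from_env()
    existing = [c.name for c in client.configs.list()]
    referenced = [
        ref["ConfigName"]
        for svc in client.services.list()
        for ref in svc.attrs["Spec"]["TaskTemplate"]["ContainerSpec"].get("Configs", [])
    ]
    for name in obsolete(existing, referenced):
        print(f"removing unused config {name}")
        client.configs.list(filters={"name": name})[0].remove()

# cleanup()  # deletes configs, so review what it would remove first
print(obsolete(["index_conf_aaaa", "index_conf_bbbb"], ["index_conf_bbbb"]))
# -> ['index_conf_aaaa']
```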

I hope you found this article useful, and that it helps you the way it would have helped me when I first ran into this problem.