I now run an entire OpenStack Swift cluster that is, of course, in production. The way it's currently set up calls for `for` loops to manage it, and those get lame after a "quick" while. So I decided to take my second plunge into Ansible (the first was pretty bad; it was a Monday).

These are the voyages of the Ansible Noobcake. Its five-hour mission: to automate configuration parameters across all swift nodes, to boldly cfg like no man has cfg’ed before.

Let’s dive in.

* I launched dockerfile/ansible and went to work bind-mounting a shared directory and setting up my private key. This was really easy :)

```
[root@720b24a24660 swift-cluster]# ansible --version
ansible 1.8.4
  configured module search path = None
```
* I created a simple playbook.yml file and started playing around. Here's an example (also, a friend's):

```
- hosts: swift-storage
  tasks:
  - name: Update Swift configuration parameters
    lineinfile:
      dest: /etc/swift/account-server.conf
      regexp: ^workers
      line: "workers = 8"
      state: present
```
* That was all fine and dandy, and once I verified something simple would work, it was time to move on to bigger and better things… which meant adopting best practices, primarily around directory layout. I eventually ended up with the following:

```
swift-cluster
├── site.yml
├── tasks
│   ├── system-enhancements.yml
│   ├── swift-enhancements.yml
│   └── disk-enhancements.yml
├── handlers
│   └── main.yml
├── staging.hosts
├── prod.hosts
└── vars
    └── main.yml
```
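For reference, the two inventory files just list the nodes in each environment under the `swift-storage` group that site.yml targets. A minimal staging.hosts could look something like this (the hostnames are hypothetical, yours will differ):

```
[swift-storage]
swift-staging-01
swift-staging-02
swift-staging-03
```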


* Now it was time to get some of the basics out of the way.
	1. I think of `swift-cluster` as a "deployment". In the deployment, there is a set of hosts, either prod or staging, which have tasks performed on them (for this, I didn't utilize roles). Those tasks inherit variables as well as a set of handlers they can call (i.e. set new Swift config parameters and then need to kick services).
    2. With this setup, I merely invoke the deployment as a whole using something like `ansible-playbook site.yml -i staging.hosts`.
    3. With this, we always assume the deployment can be re-run without issue. This means we need to use any *state* options provided by the core (and extras) Ansible modules. We also need to be mindful of existing configuration, for example, fstab lines that we want to replace and not duplicate (which will happen if you're not careful)
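As a sketch of what that re-runnability looks like in practice, a state-driven task is safe to run repeatedly, while a raw shell equivalent is not (the package name here is just an illustration, not from my actual playbooks):

```
# Idempotent: apt only acts when the package is actually missing
- name: ensure xfsprogs is installed
  apt:
    name: xfsprogs
    state: present

# NOT idempotent: this would append a duplicate line on every run
# - shell: echo "/dev/sdb /srv/node/sdb xfs noatime 0 0" >> /etc/fstab
```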
   
* Splitting out the task files wasn't too bad. It also means each file has less going on syntactically. Here's a bit of my *system-enhancements.yml*:


```
- name: Sysctl time_wait optimizations
  sysctl:
    name: net.ipv4.tcp_tw_recycle
    value: 1
    sysctl_file: /etc/sysctl.d/swift-enhance.conf
    state: present
    reload: yes

- sysctl:
    name: net.ipv4.tcp_tw_reuse
    value: 1
    sysctl_file: /etc/sysctl.d/swift-enhance.conf
    state: present
    reload: yes
```


> Note that you don't have to specify a "name"; however, it makes the deployment output much more sane!

* The next thing I had to start thinking about was the *site.yml* file. This is the main anchor that tells Ansible what to actually do. Here's what mine currently looks like:

```
- hosts: swift-storage
  tasks:
    - include: tasks/swift-enhancements.yml
    - include: tasks/system-enhancements.yml
    - include: tasks/disk-enhancements.yml
  handlers:
    - include: handlers/main.yml
  vars_files:
    - "vars/main.yml"
```

This is fairly self-explanatory. You can see some great examples of similar dir structures and site.yml files here and here.
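I don't show my handlers/main.yml in this post, but given the "change config, then kick services" idea from earlier, it would hold entries along these lines (the handler and service names here are assumptions, not my actual file):

```
- name: restart swift account server
  service:
    name: swift-account
    state: restarted
```

A config task then triggers it with `notify: restart swift account server`, and Ansible only fires the handler if that task actually changed something.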

* Lastly, I want to mention some work I did when handling /etc/fstab. At first, I tried to use the lineinfile module to replace individual lines in our current fstab files:

```
UUID=a76907ea-3344-40cb-a210-49c78c7a948f /               ext4    errors=remount-ro 0       1
# swap was on /dev/sda2 during installation
UUID=898034ff-370a-45f0-bb85-31fa7fee07ad none            swap    sw              0       0
/dev/sdb /srv/node/sdb xfs noatime,nodiratime,logbufs=8
/dev/sdc /srv/node/sdc xfs noatime,nodiratime,logbufs=8
/dev/sdd /srv/node/sdd xfs noatime,nodiratime,logbufs=8
/dev/sde /srv/node/sde xfs noatime,nodiratime,logbufs=8
/dev/sdf /srv/node/sdf xfs noatime,nodiratime,logbufs=8
/dev/sdg /srv/node/sdg xfs noatime,nodiratime,logbufs=8
/dev/sdh /srv/node/sdh xfs noatime,nodiratime,logbufs=8
```

So I wrote out the regex, set up the parenthesized subgroups, etc.:

```
- name: insert proper mount options for storage disks
  lineinfile:
    dest: /etc/fstab
    regexp: '^/dev/(\w+)\ {{ dirprefix }}'
    line: '/dev/\1 {{ dirprefix }}/\1 {{ filesystem }} {{ mountoptions }}'
    state: present
    backrefs: yes
```

Hooray!! It worked the first time (I seriously have no clue how the hell that happened)!!! However, from the module page:

> For state=present, the pattern to replace if found; only the last line found will be replaced.

Umm, not cool :( I didn’t see any other way to make lineinfile do this. In the arms of devastation, svg @ #ansible suggested the mount module!!

```
- name: insert proper mount options for storage disks
  mount:
    name: "{{ dirprefix }}/{{ item }}"
    opts: "{{ mountoptions }}"
    fstype: "{{ filesystem }}"
    src: /dev/{{ item }}
    state: present
    ## These two don't act properly when cleared out...specifically, state=present doesn't work when they are empty.
    #passno: ""
    #dump: ""
  with_items:
    - sdb
    - sdc
    - sdd
    - sde
    - sdf
    - sdg
    - sdh
```

That is what I ended up with. It might make more sense alongside my vars/main.yml file:

```
---
filesystem: xfs
mountoptions: nobarrier,noatime,nodiratime,logbufs=8
dirprefix: /srv/node
```
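With those vars, the mount task above should render /etc/fstab entries along these lines (the mount module fills in the dump and passno fields with 0 when they aren't set):

```
/dev/sdb /srv/node/sdb xfs nobarrier,noatime,nodiratime,logbufs=8 0 0
```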

I then decided I wanted to erase the old entries simultaneously (since they didn’t have a mount option I wanted). The final task file looks like this:

```
- name: clean out old fstab entries if they exist
  lineinfile:
    dest: /etc/fstab
    regexp: '.*\s+noatime,nodiratime,logbufs=8'
    state: absent
```


```
- name: insert proper mount options for storage disks
  mount:
    name: "{{ dirprefix }}/{{ item }}"
    opts: "{{ mountoptions }}"
    fstype: "{{ filesystem }}"
    src: /dev/{{ item }}
    state: present
    ## These two don't exactly need to be empty, just following convention with how things were.
    #passno: ""
    #dump: ""
  with_items:
    - sdb
    - sdc
    - sdd
    - sde
    - sdf
    - sdg
    - sdh
```

This was a bit cleaner, and although I couldn't just regex my way out of defining each device name, I still like it better!
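One tweak I could see making later (untested on my part): move the device list into vars/main.yml so it's defined once per deployment instead of inline in the task. Something like:

```
# in vars/main.yml
disks: [sdb, sdc, sdd, sde, sdf, sdg, sdh]
```

and then in the task, replace the literal list with `with_items: "{{ disks }}"`, which would let prod and staging diverge on disk layout by varying just the vars file.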

Overall, there was a lot I didn't cover here, but from what I can tell, Ansible seems like a decently flexible solution for configuration management… and I love only needing SSH ;P

Mario Loria is a builder of diverse infrastructure with modern workloads on both bare-metal and cloud platforms. He's traversed roles in system administration, network engineering, and DevOps. You can learn more about him here.