Docker


Backing up to S3: The Containerized Way

I recently decided to jump into the object storage revolution (yeah, I’m a little late). The drive comes from my very old archives I’d like to store offsite, but also from wanting to streamline how I deploy applications that have things like data directories and databases that need to be backed up. Lately, through my work at Arbor and my own personal dabbling, I’ve come to love the idea that a service may depend on one or more containers to function.

Continue reading ↦

Handling Cron inside your container

Sometimes you need an application to run at a scheduled time. Ideally, it would be a really cool feature if you could merely tell the docker daemon to do this via some sort of schedule: * 1 * * * entry in your docker-compose.yml. Sadly, this isn’t really possible, so you have two options: source your image from a base which already has cron installed, or install cron yourself. Either way, there are a few things you need to watch out for.
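As a rough sketch of the second option (the base image tag, cron file name, and schedule here are placeholders, not from the post):

```dockerfile
# Minimal sketch, assuming a Debian-based image.
FROM debian:jessie
RUN apt-get update && apt-get install -y cron

# Files in /etc/cron.d need a user field and must end with a newline:
#   * 1 * * * root /usr/local/bin/backup.sh
COPY backup-cron /etc/cron.d/backup-cron
RUN chmod 0644 /etc/cron.d/backup-cron

# Run cron in the foreground so the container stays alive
CMD ["cron", "-f"]
```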

Continue reading ↦

autofs in docker containers

Today I started writing up a backupninja container for work. This container needs to be able to: log in to some of our prod boxes, and store backup data on an NFS share. The logical choice for handling the back end was autofs, because of its ability to handle mounts that may drop out for whatever reason; since we really need our storage available, doing a plain mount is just not going to cut it.
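A minimal autofs setup along these lines might look like the following (the mount point, map file, server, and export path are all hypothetical):

```
# /etc/auto.master: mount under /mnt/backups using the map file below,
# auto-unmounting after 300s of inactivity
/mnt/backups /etc/auto.backups --timeout=300 --ghost

# /etc/auto.backups: one NFS mount, available as /mnt/backups/data
data -fstype=nfs,rw nfs.example.com:/export/backups
```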

Continue reading ↦

Difference between ENTRYPOINT and CMD in Dockerfiles

A lot of people don’t get the difference between these two, and I think creack over at Stack Overflow did a great job explaining it: Docker has a default entrypoint, which is /bin/sh -c, but does not have a default command. The command is run via the entrypoint; i.e., the actual thing that gets executed is /bin/sh -c bash. This allowed docker to implement RUN quickly by relying on the shell’s parser.
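A quick illustration of how the two interact (the image base and strings are arbitrary examples):

```dockerfile
FROM ubuntu:14.04
# ENTRYPOINT fixes the executable; CMD supplies default arguments to it
ENTRYPOINT ["echo", "prefix:"]
CMD ["default-arg"]
```

Running `docker run <image>` executes `echo prefix: default-arg`, while `docker run <image> other` replaces only the CMD portion and executes `echo prefix: other`.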

Continue reading ↦

Giving non-root users power over ports <1024

I needed a quick and dirty way to allow a non-root user to use lower ports. This is because I’m starting to launch docker containers where the CMD process runs as a non-root user. The first container I thought this might work well for is my docker-ncat-proxy container, which runs ncat as the nobody user. Using Linux capabilities, we can use the setcap command to let a binary bind privileged ports without running as root.
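For example, granting the bind capability to ncat might look like this (the binary path is an assumption; adjust for your image):

```
# Allow the binary to bind ports <1024 regardless of which user runs it
# (must be run as root; /usr/bin/ncat is an assumed path)
setcap 'cap_net_bind_service=+ep' /usr/bin/ncat

# Verify the capability took effect
getcap /usr/bin/ncat
```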

Continue reading ↦

Keeping Dockerfiles sane: Some important tips

An excellent article sent to me by a friend, pointing out some of the important things to do/remember when creating Dockerfiles. You should also check out: the official Docker documentation best practices, and Michael Crosby’s take 2. A key thing to remember from a top-level standpoint whilst getting started: try to be “lean”. Your app is just that, your app, and usually it should be the only thing running inside a container.

Continue reading ↦

My jump into coreos, the tiny, docker-centric distribution:)

Booting into the livecd, it’s pretty basic: set up networking with “ip addr add” etc. commands: # ip addr add <address>/<masklen> dev eth0 # ip link set dev eth0 up # ip route add default via <default gw> Then set the root user password and log in via ssh, and do something similar to the below: basically, create a cloud-config and call the coreos-install command. ?[~]> ssh [email protected] [email protected]'s password: CoreOS (stable) Update Strategy: No Reboots [email protected] ~ # export http_proxy=http://proxy.
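The install step itself can be sketched like so (the disk, channel, hostname, and key are placeholders, not values from the post):

```shell
# Write a minimal cloud-config (hostname and ssh key are placeholders)
cat > cloud-config.yml <<'EOF'
#cloud-config
hostname: core01
ssh_authorized_keys:
  - ssh-rsa AAAA... user@host
EOF

# Install CoreOS from the stable channel onto /dev/sda using that config
coreos-install -d /dev/sda -C stable -c cloud-config.yml
```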

Continue reading ↦

Figure out the total space used by docker

This isn’t as easy as you think…a normal du -h doesn’t work on /var/lib/docker, because a plain du miscounts the aufs layered mounts docker uses. The proper way to figure out how much space is actually being used involves a few more arguments: du -shx /var/lib/docker



Understanding Docker Images

So besides how great it is to be able to just pull down a docker image, there’s actually a bit more advanced stuff you can do in terms of manipulating an image. The following points will give you a better understanding of how to work with, create, and modify images for your own projects:) The two ways to get an image… A registry. A docker registry (i.e. registry.hub.docker.com) allows you to easily pull an entire image locally, either to build other images on top of or to just start a container.
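For instance, pulling an image and inspecting the layers it is built from (the tag is an arbitrary example):

```
docker pull ubuntu:14.04
# Show the stack of layers, and the Dockerfile steps that created them
docker history ubuntu:14.04
```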

Continue reading ↦

Using docker: An Introductory guide (Part 1)

Docker gives you the ability to run linux containers, or “chroot on steroids”, which use a layered approach (device-mapper or aufs) to let users create images, build containers off of them, and deploy applications quickly for both development and production (and maintain uniformity!). Before we start: virtually any major service/application has been “dockerized”, meaning at least one person has made a docker repo for it! For examples, just do searches like “docker-nginx” or “docker-powerdns”.

Continue reading ↦

Start a docker container to play with, then save it!

docker run --rm -t -i phusion/baseimage:0.9.11 /bin/bash I use the baseimage-docker distro from phusion…it’s quite nice: it includes bash, runit, and a few other features that make it feel like a full-featured install and play properly with docker (i.e. docker stop works correctly). The --rm flag will remove the container after you leave it, which is generally preferred. We just launch bash in this example; you could make your own image and launch it with zsh or the like:)
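To actually save your changes as a new image, commit the container before you exit it (remember --rm deletes the container on exit); the image name here is a placeholder:

```
# From another terminal, while the container is still running:
docker ps                          # grab the container ID
docker commit <container-id> mynewimage:snapshot
docker images mynewimage           # confirm the new image exists
```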

Continue reading ↦

Saving docker images without a registry

There is a pretty convenient way to save the docker images you build without needing to push them to a registry: docker save mynewimage > /tmp/mynewimage.tar Then, to use it on a new host: docker load < /tmp/mynewimage.tar Thanks James!