Archive for the ‘Guides’ Category

Understanding Docker images

So besides how great it is to be able to just pull down a Docker image, there are actually some more advanced things you can do in terms of manipulating an image. The following points will give you a better understanding of how to work with, create, and modify images for your own projects :)

The two ways to get an image…

1. A registry. A Docker registry (e.g. registry.hub.docker.com) allows you to easily pull an entire image locally, either to use in creating other images or to just start a container. This is the simplest way to get up and running quickly. If you are going to be doing a lot with images, especially creating your own, it's a good idea to consider running an internal registry (just search for "docker-registry" containers to help you get started). There are also sites like Quay.io which allow you to run your own private registry :)

2. Build files/bundle. A build bundle is merely a tarball or repo of all the dependencies needed to build an image. This can be as little as a Dockerfile, or also include source code to be built, run scripts, or anything else the image needs to contain for its particular purpose. When you pull down a build repo, you merely enter it and run:

docker build -t myimage .
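For the registry route, getting an image is a one-liner (ubuntu:14.04 here is just an example):

docker pull ubuntu:14.04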

Creating an image from an existing image

Most people, at some point in time, will want to take an existing image and build their own based on it. There are a couple of ways to do this:

1. Create your own build directory and start the Dockerfile with a "FROM user/imagerepo" statement (a minimal sketch follows this list).

2. Start a container, make your desired changes, and commit those changes to an image.
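For #1, a minimal Dockerfile might look like this (the parent image and the package installed here are just examples):

FROM ubuntu:14.04
RUN apt-get update && apt-get install -y nginx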

For #1, this is fairly straightforward. When you build your image, it will inherit the parent image's layers and then run through the rest of your Dockerfile. For #2, however, this is a bit more tricky. Basically, you start a container based off an image, let's say ubuntu:latest, make your desired modifications in that container, exit, and then commit those changes, creating a new image!

ubuntu -> Container (make changes) -> save changes to new image, myimage

What that looks like is this:

$ sudo docker run -t -i tatum/gentoo-stage3 /bin/bash
root@0b2616b0e5a8:/# echo "iptables-restore < /etc/iptables.conf" > /etc/rc.local
root@0b2616b0e5a8:/# exit
$ sudo docker commit -m="load iptables at boot" -a="Jon Doe" 0b2616b0e5a8 jdoe/gentoo-stage3:v2 
4f177bd27a9ff0f6dc2a830403925b5360bfe0b93d476f7fc3231110e7f71b1c

See, it's actually really simple. Notice that the ID we use for the commit is just the ID of the container we made changes in :)

Understanding tags…

Whenever you see something like "ubuntu:14.04", that's merely a mark identifying a particular commit of that image, ubuntu. In our above example, we used :v2 to designate that the image is not the same as the original one.
That's really all it is! Just like tags in source repositories, it merely tags a certain commit to make it stand out for whatever reason (version, special feature, etc.)

Tags in no way define how images are built off of one another, or anything along those lines.
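Adding a tag to a local image yourself is also a one-liner; for instance, using the image from the commit example above (the :stable tag is made up):

docker tag jdoe/gentoo-stage3:v2 jdoe/gentoo-stage3:stable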

Building images off of images: Understanding “base”

I recently got into some discussion with co-workers about progression of our containers and how to keep things fresh.

Let's say we have an image, "docker-gentoo", which is just a basic Gentoo install. The following lists images, each built from the one above it, that we create to aid users in starting at a certain point, for whatever purpose:

docker-gentoo-ssh (FROM docker-gentoo)
docker-gentoo-cron (FROM docker-gentoo-ssh)
docker-gentoo-haproxy (FROM docker-gentoo-cron)

So this is all fine and well. Now let's ask some questions:

Q1. What happens if we re-build the docker-gentoo image with a fresh stage3 tarball and push it up to the registry… when docker-gentoo-cron is pulled down from the registry, does it automatically inherit those changes?
Q2. What if I remove docker-gentoo-cron from the registry? Will docker-gentoo-haproxy be broken when a user goes to pull it down?

Here are the answers, thanks to some cool dudes over @ #docker on freenode:

A1. When you push that original docker-gentoo image to the registry, the ID of the topmost layer is saved. So all subsequent images using docker-gentoo (i.e. docker-gentoo-ssh) start from that ID (e.g. 0x9) and continue on when they are pushed to the registry. That means that when you "docker push docker-gentoo-ssh" to the registry, a network request is made for each layer of the image to compare it with what already exists in the registry. So the registry will figure out that it should only store the new layers, since it already has the original ones (docker-gentoo).
Now you make your changes to docker-gentoo, push them (new ID 0x12), and then another user pulls down docker-gentoo-ssh shortly thereafter. This other user will not get an "updated" image, since the registry will remember that original ID (i.e. 0x9, from when the image was initially pushed) and serve the original layers that make up docker-gentoo-ssh. The only way to "update" docker-gentoo-ssh to inherit the new docker-gentoo is to rebuild it and then re-push it to the registry. This also means that you would work your way up the tree (-ssh, -cron, -haproxy), gradually rebuilding each image and re-pushing it to the registry in order to update all of them.
A2. No, when you remove docker-gentoo-cron from the registry, it will not remove its layers since docker-gentoo-haproxy still relies on them.
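So, per A1, refreshing the whole family means walking up the tree, rebuilding and re-pushing each image in order. Roughly, assuming each image has its own build directory named after it:

cd docker-gentoo && docker build -t docker-gentoo . && docker push docker-gentoo
cd ../docker-gentoo-ssh && docker build -t docker-gentoo-ssh . && docker push docker-gentoo-ssh
cd ../docker-gentoo-cron && docker build -t docker-gentoo-cron . && docker push docker-gentoo-cron
cd ../docker-gentoo-haproxy && docker build -t docker-gentoo-haproxy . && docker push docker-gentoo-haproxy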

Hopefully you understand a bit better how images can be tamed :) Below is just an excerpt from my IRC convo:

12:51   Sinjek : if ubuntu updates its /bin/bash due to shellshock, every single thing that said FROM ubuntu has to be rebuild. They're all vulnerable.
12:51 InAnimaT : but how does it know if you're just doing FROM ubuntu?
12:52   Sinjek : Because people don't download dockerfiles.
12:52 InAnimaT : does it keep track of the ID from which your image derived from
12:52   Sinjek : Yup
12:52   Sinjek : since each layer is a filesystem diff it has to do it that way; it's not smart enough to merge your layer onto a new layer
12:53   Sinjek : Docker automated builds / trusted builds fix that a little
12:53   Sinjek : there, you give dockerhub your Dockerfile and it rebuilds it when your FROM changes for you.

Getting a GUI on RHEL 6.3 Server

Fortunately, I could at least get the Server edition via the eval program… btw, I really dislike RHEL.

1. Install the X Window System and Desktop package groups:

yum groupinstall "X Window System" Desktop


1.5 Edit /etc/inittab for runlevel 5 (change the 3 to a 5, so the line reads id:5:initdefault:)
2. /etc/sysconfig/desktop needs to contain:

DISPLAYMANAGER=GNOME
DESKTOP=GNOME

3. Install vmware-tools (you can do this via the CLI), mainly by running ./vmware*.pl (read the INSTALL file)
4. Enjoy?

Overview of architecture names for x86 systems

A quick overview of architecture names for x86 systems:

* i386: The 80386 CPU has historically been the bare minimum for running Linux on a PC-compatible system. Consequently, a package for "i386" is designed for maximum compatibility and can run on any x86-like system; a system that describes itself as "i386" is either ancient or exotic, and can only be counted on to run i386 packages.

* i486, i586: Rarely seen, these refer to the 80486 and Pentium (80586) CPUs. Almost nobody builds packages specifically for them, targeting either i386 or i686 instead.

* i686: This is the "P6" microarchitecture, used by Intel for the Pentium Pro and newer. It's generally considered the baseline for "modern" 32-bit x86 systems. An "i686" package is designed to run on these systems (and will usually perform better than an "i386" version); an "i686" system can run any i386, i486, i586, or i686 package.

* x86_64: This is the 64-bit instruction set for x86 CPUs. An "x86_64" package can only run on x86_64 systems; an "x86_64" system requires special kernel and library support (which almost all mainstream Linux distros provide) to run packages from the i*86 family.
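To see which of these your own system reports, check the machine hardware name with uname (output shown is just an example):

$ uname -m
x86_64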

Configuring network interfaces on RHEL

So I don’t usually use a RHEL based machine, but recently was tasked with deploying one on our network.
After the normal install process, which I decided to do text-based by adding the following to the boot statement for the default installer option (push "e" when it appears):

linux text

I decided to set up networking later, after the machine was fully installed. On Ubuntu/Debian based systems, /etc/network/interfaces defines the configuration for interfaces on the system. The equivalent on RHEL based systems is /etc/sysconfig/network-scripts/ifcfg-<interface>, where <interface> is the interface name, e.g. "eth0".

For my needs, I needed to do a static assignment including search paths and DNS servers. After using this helpful RHEL guide and doing some digging, I figured it out. Here is my final config for eth0!

DEVICE="eth0"
BOOTPROTO="none"
HWADDR="00:51:56:B8:B1:B8"
NM_CONTROLLED="no"
ONBOOT="yes"
TYPE="Ethernet"
UUID="82a66a94-b522-4ca2-9666-57bb4456256d"
IPADDR="10.0.11.8"
NETWORK="10.0.10.0"
NETMASK="255.255.254.0"
GATEWAY="10.0.10.1"
BROADCAST="10.0.11.255"
DNS1="10.3.0.12"
DNS2="10.3.1.14"
PEERDNS="yes"
DOMAIN="herpderp.com lolcakes.com nowaydude.com"

So this will properly bring up the interface at boot, apply the necessary IP address and netmask, set the gateway for the system, and throw the DNS servers and search paths into /etc/resolv.conf :)
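To apply the config without a reboot, you can bounce networking (standard RHEL 6 commands):

service network restart
# or just the one interface:
ifdown eth0 && ifup eth0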

For DHCP, this would be even simpler:

DEVICE="eth0"
BOOTPROTO="dhcp"
HWADDR="00:51:56:B8:B1:B8"
NM_CONTROLLED="no"
ONBOOT="yes"
TYPE="Ethernet"
UUID="82a66a94-b522-4ca2-9666-57bb4456256d"

Note that most of those fields existed already, i.e. UUID, TYPE, etc.

Using docker: An Introductory guide (Part 1)

Docker gives you the ability to run Linux containers, or "chroot on steroids", which use a layered filesystem approach (via device-mapper or aufs) to let users create images, build containers off of them, and deploy applications quickly for both development and production (and maintain uniformity!)

Before we start, know that virtually any major service/application has been "dockerized", meaning at least one person has made a Docker repo for it! For examples, just do searches like "docker-nginx" or "docker-powerdns". Looking at the Dockerfiles other people create can help you understand the best way to run an application.

The following key points will outline the basic structure in working with and using docker:

1) “(optional: docker repo) -> Dockerfile -> Image -> Container” OR “Docker Image from Registry -> Container”
2) A Dockerfile defines what to install, commands to run, and files to add to the final resulting image.

Think of it this way: a Dockerfile is not /etc/rc.local, but rather a set of instructions for building an ISO. A Dockerfile sits in a directory called a "build directory" or "build repo". This build repo contains everything needed to build a Docker image, including sources for an app, a custom rc.local, or anything else you tell the Dockerfile to pull into the built image. Here is an example Dockerfile from my ZNC bouncer container:

FROM ubuntu:14.04

ENV DEBIAN_FRONTEND noninteractive

# install ZNC build prerequisites
RUN apt-get update && \
        apt-get install -yq --force-yes coreutils g++ libssl-dev make

# build and install
ADD znc-1.4 /opt/znc-1.4
##RUN cd /opt && tar xzvf znc-1.4.tar.gz
RUN cd /opt/znc-1.4 && ./configure && make && make install

# clean up after the build (each RUN starts at /, so use the full path)
RUN rm -rf /opt/znc-1.4*
RUN apt-get remove -yq g++ libssl-dev make
RUN apt-get autoremove -yq && apt-get clean

# run ZNC as an unprivileged user
RUN useradd -m -d /opt/znc znc
ADD run /opt/znc/run
RUN chmod +x /opt/znc/run
RUN chown -R znc:znc /opt/znc

# run this image as a ZNC server
USER znc
WORKDIR /opt/znc
CMD ["/opt/znc/run"]

Here we see instructions to Docker itself, like "RUN" or "ADD". Some are self-explanatory, but there are a lot of them, so you'll want to keep the Dockerfile reference handy. In this example, ADD allows us to copy files into the image from the current working directory, namely the source code.

At the end of each Dockerfile, it is necessary to run some sort of foreground program/command that will continue to run, ending only when you intend for the container to actually stop. Yes, a container doesn't just stay running by itself; there has to be a continuing process.

3) To build an image named znc, you would ensure you currently reside in the build directory and run:

docker build -t znc .

The Docker daemon will instantly start building. Don't worry if you see some red text fly by real quick… get some Cheez-Its and come back to see the obvious result of your build. If a problem does occur, you'll see exactly where Docker was forced to stop and what the problem was.

Once you get an image built successfully, run

docker images

to see the completed image! Now, you can save this Docker image to a "registry", which is merely a store for Docker images. When you see a "FROM ubuntu:14.04", this merely says "pull down the ubuntu 14.04 image from the official Docker registry and build the following contents of this Dockerfile on top of it"… remember the layered approach :)
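Pushing your own image up to a registry looks roughly like this (after a docker login; the "mario" username is just an example):

docker tag znc mario/docker-znc
docker push mario/docker-znc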

4) Now, you can easily create and start a container using the “docker run” command. For example, the way I would run the ZNC image we made above:

docker run -d -v /docker/docker-znc/znc_config:/opt/znc/znc_config -p 18000:18000 --name znc znc

Breaking this down:
a) We utilize "-d" to tell the container to run in "detached mode". What this means is that your current console will not be attached to the running process's stdin, stdout, and stderr.
b) The "-v <source>:<dest>" flag defines a volume on the host we want to mount in our container. In this case, I have my ZNC configuration directory in my build dir and just mount it from there. Note that both the source and dest must be absolute paths, and the volume will be mounted read/write!
c) For our "-p 18000:18000", we are merely defining a <host port>:<container port> mapping. So port 18000 on the host will be directly mapped to port 18000 in the container. By default, Docker will automagically create and manage the iptables NAT rules to define these relations and push traffic through to your container. (This automation can be turned off.)
d) Since it's nice to be able to define a name for a container, we just use a simple "znc". If you don't define a name, one is made up for the container.
e) The last argument, simply "znc", defines the image we want to use. Note that since the image is local, there is no path or anything to specify (Docker handles all that; see /var/lib/docker if you're interested). However, let's say you wanted to start a container from an image on the official registry; you could change this to something like "mario/docker-znc"!
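Once it's up, a couple of quick commands to check on the container:

docker ps          # lists running containers; you should see "znc"
docker logs znc    # shows the container's stdout/stderr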

This ends part 1. Part 2 will talk more about general practices with starting containers, saving/backing them up, and more!

Compress a MySQL database table

MySQL InnoDB engine based databases support compression of table data using the zlib compression algorithm.

Per the official documentation, it is quite easy to create or alter a table to support compression! It of course helps quite a bit with columns which contain a lot of text (the TEXT, MEDIUMTEXT, and LONGTEXT column types).

Here is how I altered my table using phpMyAdmin (since I didn't see an obvious place in the GUI to do it, I just ran the following SQL statements on the DB):

SET GLOBAL innodb_file_per_table=1;    # MySQL returned an empty result set (i.e. zero rows).

SET GLOBAL innodb_file_format=Barracuda;    # MySQL returned an empty result set (i.e. zero rows).

ALTER TABLE herpderp ROW_FORMAT=COMPRESSED;    # MySQL returned an empty result set (i.e. zero rows).

As you can see, after clicking Go, I got the commented text saying things seem to be successful :)
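To double-check that the row format actually changed, you can query information_schema (table name as above):

SELECT table_name, row_format
  FROM information_schema.tables
 WHERE table_name = 'herpderp';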

Re-defining a variable composed of dynamic variables

In one of my scripts, I wanted to create a log file that was dynamic based on how the script was running. For example, if the script was doing something like waiting on another script or generated an error for whatever reason, I wanted the log file name to change to reflect that.

I started with this. I used $STATUS as the switch and called a function (setstatus) to change it based on what was passed in (by another function or whatever, etc.):

1. In a conf file (script.conf):

...
LOG="$NAME-$STATUS-$FILENAME.log"
...

2. In the actual script:

RUNNING="Running"  ## Doing this so I can easily define what the text looks like for each case
WAITING="Waiting"
ERROR="ERROR"
FINISHED="Finished"
STATUS="$RUNNING"

source script.conf
...
...
setstatus () {
# This function changes the status of the script and changes the log file appropriately (e.g. $NAME-$STATUS-$FILENAME.log)
# It takes in the new status to set. Don't call this unless you actually want to change the file while mid/end-script.

# Available statuses are:
#1# Running - the script is currently running
#2# Waiting - the script is waiting on another script to continue
#3# ERROR - the script has finished but with an error.
#4# Finished - the script has completely finished successfully

if [ "$LOG" ]; then
    OLDLOG="$LOG"
    case "$1" in
        1)  [ "$STATUS" != "$RUNNING" ] && STATUS="$RUNNING" && mv "$OLDLOG" "$LOG"
            ;;
        2)  [ "$STATUS" != "$WAITING" ] && STATUS="$WAITING" && mv "$OLDLOG" "$LOG"
            ;;
        3)  [ "$STATUS" != "$ERROR" ] && STATUS="$ERROR" && mv "$OLDLOG" "$LOG"
            ;;
        4)  [ "$STATUS" != "$FINISHED" ] && STATUS="$FINISHED" &&  mv "$OLDLOG" "$LOG"
            ;;
    esac
fi
}

3. What actually came from that was a non-working script. The mv command would get mad at me because I was trying to rename a file to the same name it already had; hence the $STATUS shown in the filename wouldn't ever change from "Running"… wonder why?
Turns out, just because you use a variable later on doesn't mean the variables referenced in its definition are re-evaluated. Once it's defined, the variables used inside a variable are expanded to whatever they hold at the time the variable is set. In my case, that's as soon as the script starts, hence the "Running".

4. How can we fix this… hmm. Maybe an eval "LOG=$LOG" will re-evaluate it for us? Nope. Even if you throw a '\' before the $STATUS variable, this doesn't work. Why? Because the variable was already fully expanded at the start of our script (as soon as we sourced the conf file).

5. What I did: I created a variable named LOGTEMPLATE, escaped out all the variables in it, and then, instead of an eval "LOG=$LOG", did an eval "LOG=$LOGTEMPLATE".
When you run this, the LOG variable is set to the expanded $LOGTEMPLATE, which in turn expands $STATUS and the other inner variables. Hence whatever $STATUS is set to at that moment is what gets loaded into $LOG… clever, huh?
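Here's a minimal standalone demo of the difference (not from the actual script, just the expansion behavior):

#!/bin/bash
STATUS="Running"
LOG="script-$STATUS.log"           # $STATUS expands right now
STATUS="Finished"
echo "$LOG"                        # prints: script-Running.log

LOGTEMPLATE="script-\$STATUS.log"  # escaped, so $STATUS survives as literal text
eval "LOG=$LOGTEMPLATE"            # expansion happens here instead
echo "$LOG"                        # prints: script-Finished.log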

6. So now, my script looks like so:

script.conf:

...
LOGTEMPLATE="$NAME-\$STATUS-$FILENAME.log"
...

The script:

RUNNING="Running"  ## Doing this so I can easily define what the text looks like for each case
WAITING="Waiting"
ERROR="ERROR"
FINISHED="Finished"
STATUS="$RUNNING"

source script.conf
[ "$LOGTEMPLATE" ] && eval "LOG=$LOGTEMPLATE"
...
...
setstatus () {
# This function changes the status of the script and changes the log file appropriately (e.g. $NAME-$STATUS-$FILENAME.log)
# It takes in the new status to set. Don't call this unless you actually want to change the file while mid/end-script.

# Available statuses are:
#1# Running - the script is currently running
#2# Waiting - the script is waiting on another script to continue
#3# ERROR - the script has finished but with an error.
#4# Finished - the script has completely finished successfully

if [ "$LOG" ]; then
    OLDLOG="$LOG"
    case "$1" in
        1)  [ "$STATUS" != "$RUNNING" ] && STATUS="$RUNNING" && eval "LOG=$LOGTEMPLATE" && mv "$OLDLOG" "$LOG"
            ;;
        2)  [ "$STATUS" != "$WAITING" ] && STATUS="$WAITING" && eval "LOG=$LOGTEMPLATE" && mv "$OLDLOG" "$LOG"
            ;;
        3)  [ "$STATUS" != "$ERROR" ] && STATUS="$ERROR" && eval "LOG=$LOGTEMPLATE" && mv "$OLDLOG" "$LOG"
            ;;
        4)  [ "$STATUS" != "$FINISHED" ] && STATUS="$FINISHED" && eval "LOG=$LOGTEMPLATE" &&  mv "$OLDLOG" "$LOG"
            ;;
    esac
fi
}

Thanks linuxquestions.

Moving your local repo to a new remote repo

So the server I usually use to store my repo is going to be going offline, and I decided I wanted to just move it to Bitbucket and not have to worry.

$ git remote
origin

So obviously we only have our remote origin.

git remote remove origin
git remote add origin git@bitbucket.org:username/repo.git
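You can verify the new remote took effect with:

git remote -v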

Then, let's push up our repo and all the data associated with it (i.e. your commits):

git push -u origin --all # pushes up the repo and its refs for the first time
git push -u origin --tags # pushes up any tags

And that's it!

As a side note, I decided not to have dual remotes as it could get messy. Thanks for the warning, stackoverflow.

Migrating from Maildir to Gmail

0. Delete old emails if necessary, and turn off conversation view (Settings > General), since this lumps everything together. (Note, you can use searches like "before:2014/4/29".)
1. Use getmail to pull all emails in mbox or Maildir format from your old email account. I initially did Maildir, hence why below I convert to mbox for Thunderbird.

I configured it like this for pulling from our corporate Exchange server:

[retriever]
type = SimpleIMAPSSLRetriever
server = exchange.herpderp.net
username = merry0@lolcakes.net
password = secret
mailboxes = ("Inbox", "Sent Items")

[destination]
type = Maildir
path = /home/merry0/mail/

[options]
verbose = 2
message_log = ~/.getmail/log

# do not alter messages
delivered_to = false
received = false
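With that saved as a getmail rc file (say, ~/.getmail/getmailrc-exchange; the name is arbitrary), the pull itself is just:

getmail -r getmailrc-exchange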

1a. My emails from Exchange had a pretty dire problem. For whatever reason (Google didn't tell me much on this), prepended to every address in every To: field (on every single email) was '=SMTP:', so a To: line would look like "to:=SMTP:merry0@lolcakes.net".

Yeah, this was super annoying. To fix it, once everything was in Maildir format, I ran this fancy command in the directory holding all the emails:

grep -rl '=SMTP:' * | xargs sed -i 's/=SMTP://g'

2. Convert to mbox with the script below (only needed if your email is in Maildir format; if you're already in mbox, you're fine).

This script is nice and simple; note that you may need to do an apt-get install python-mailbox. You can also see it here.

#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
Frédéric Grosshans, 19 January 2012
Nathan R. Yergler, 6 June 2010
 
This file does not contain sufficient creative expression to invoke
assertion of copyright. No warranty is expressed or implied; use at
your own risk.
 
---
 
Uses Python's included mailbox library to convert mail archives from
maildir [http://en.wikipedia.org/wiki/Maildir] to
mbox [http://en.wikipedia.org/wiki/Mbox] format, including subfolders.
 
See http://docs.python.org/library/mailbox.html#mailbox.Mailbox for
full documentation on this library.
 
---
 
To run, save as md2mb.py and run:
 
$ python md2mb.py [maildir_path] [mbox_filename]
 
[maildir_path] should be the path to the actual maildir (containing new,
cur, tmp, and the subfolders, which are hidden directories with names like
.subfolder.subsubfolder.subsubsubfolder);
 
[mbox_filename] will be newly created, as well as a [mbox_filename].sbd
directory.
"""
 
import mailbox
import sys
import email
import os
 
def maildir2mailbox(maildirname, mboxfilename):
   """
   slightly adapted from maildir2mbox.py,
   Nathan R. Yergler, 6 June 2010

http://yergler.net/blog/2010/06/06/batteries-included-or-maildir-to-mbox-again/

   """
   # open the existing maildir and the target mbox file
   maildir = mailbox.Maildir(maildirname, email.message_from_file)
   mbox = mailbox.mbox(mboxfilename)
 
   # lock the mbox
   mbox.lock()
 
   # iterate over messages in the maildir and add to the mbox
   for msg in maildir:
       mbox.add(msg)
 
   # close and unlock
   mbox.close()
   maildir.close()
 
 
dirname=sys.argv[-2]
mboxname=sys.argv[-1]
print(dirname +' -> ' +mboxname)
mboxdirname=mboxname+'.sbd'
maildir2mailbox(dirname,mboxname)
if not os.path.exists(mboxdirname): os.makedirs(mboxdirname)
 
listofdirs=[dn for dn in os.walk(dirname).next()[1] if dn not in ['new', 'cur', 'tmp']]
for curfold in listofdirs:
   curlist=[mboxname]+curfold.split('.')
   curpath=os.path.join(*[dn+'.sbd' for dn in curlist if dn])
   if not os.path.exists(curpath): os.makedirs(curpath)
   print('| ' +curfold +' -> '+curpath[:-4])
   maildir2mailbox(os.path.join(dirname,curfold),curpath[:-4])
 
print('Done')

3. Using Thunderbird, add the ImportExportTools add-on, create a Unix Movemail account, and right-click on it > ImportExportTools > Import mbox file.

3a. Enable IMAP in your Gmail settings. Then add this account to Thunderbird. The standard IMAP SSL settings Google uses (e.g. imap.gmail.com) apply here. They have a guide on this.

4. Select the emails you want moved from the mbox-based account to Google. First I did the emails sent by myself: just right-click > Copy > Your Gmail Account > [Gmail]/Sent Mail. Then I did the rest to [Gmail]/All Mail.

NOTE: You may want to dump all your non-sent mail to a label instead of All Mail, in case you have a ton of emails and want to track which ones you are migrating. In my case, the connection dropped somehow and Thunderbird didn't just try again; it just kinda stopped.
When you do this, the messages do not go to your inbox automatically. This means that once your messages are moved over, to ensure they exist in your inbox, select all the messages in your "migration" label and select Move to > Inbox. These messages will still keep the "migration" label but will also exist in your inbox!

5. Finally, you will have to re-apply your filters by searching again for emails matching each filter. Go through each filter, hit Edit, and re-search for matching emails. Then just do Continue > Update Filter to ensure the filter applies to the newly added emails.

Dealing with VPN on Linux Mint

Setting up VPN via NetworkManager in Linux Mint was actually really simple, but I kept having issues with DNS. I set up my VPN server to push itself down as the DNS server to use, along with a domain-search domain. Two problems came up:

1. The domain-search domain from the VPN was getting mixed in with the other search domains from the local DHCP server when the connection was on eth0.
2. NetworkManager launches dnsmasq when it starts. (Note, there is no /etc/init.d/dnsmasq script or anything like that; it's merely a binary sitting in /usr/sbin that gets launched with a bunch of conf options at start.) It works similarly with OpenVPN when you spawn a VPN connection, so on connecting to the VPN, the DNS address of the gateway got loaded into dnsmasq just fine. But when trying to ping one of my boxes on the network local to my VPN server/gateway, I got a response for boxname.firstsearchdomain.com, which is not what I want.

How I fixed this:

1. Pushed “DOMAIN” instead of “DOMAIN-SEARCH” from the server/gateway.

push "dhcp-option DOMAIN house.com"
push "dhcp-option DNS 192.168.1.1"

2. You can turn off dnsmasq in /etc/NetworkManager/NetworkManager.conf if you want. Not a big deal if you don't. Either way, the way this works is that when you connect to the VPN, the settings pushed down from the VPN server get loaded into dnsmasq.
3. Personally, I keep it on and enable caching, since I want as little extra traffic going through the VPN as possible. You can do this by making a file in /etc/NetworkManager/dnsmasq.d/ and placing dnsmasq configuration parameters in there; dnsmasq, unlike most programs, accepts its configuration parameters from config files as well as from the command line (a one-line example is shown after the log excerpt below).
3a. I verify that dnsmasq is picking up my configuration parameters by watching syslog for a line like: "started, version 2.66 cachesize 500"
4. You can see how NetworkManager starts dnsmasq by using "ps -ef | grep dnsmasq" to see what conf options are passed (manpage).
5. You can also see the full connection details in syslog:

Apr 25 16:01:51 minty NetworkManager[27583]: <info> VPN connection 'house' (IP4 Config Get) reply received from old-style plugin.
Apr 25 16:01:51 minty NetworkManager[27583]: <info> VPN Gateway: 1.2.3.4
Apr 25 16:01:51 minty NetworkManager[27583]: <info> Tunnel Device: tun0
Apr 25 16:01:51 minty NetworkManager[27583]: <info> IPv4 configuration:
Apr 25 16:01:51 minty NetworkManager[27583]: <info>   Internal Gateway: 172.1.1.2
Apr 25 16:01:51 minty NetworkManager[27583]: <info>   Internal Address: 172.1.1.3
Apr 25 16:01:51 minty NetworkManager[27583]: <info>   Internal Prefix: 32
Apr 25 16:01:51 minty NetworkManager[27583]: <info>   Internal Point-to-Point Address: 172.1.1.2
Apr 25 16:01:51 minty NetworkManager[27583]: <info>   Maximum Segment Size (MSS): 0
Apr 25 16:01:51 minty NetworkManager[27583]: <info>   Static Route: 192.168.1.0/24   Next Hop: 192.168.1.0
Apr 25 16:01:51 minty NetworkManager[27583]: <info>   Static Route: 172.1.1.1/32   Next Hop: 172.23.23.1
Apr 25 16:01:51 minty NetworkManager[27583]: <info>   Forbid Default Route: no
Apr 25 16:01:51 minty NetworkManager[27583]: <info>   Internal DNS: 192.168.1.1
Apr 25 16:01:51 minty NetworkManager[27583]: <info>   DNS Domain: 'house.com'
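For reference, the caching tweak from step 3 is just a file like /etc/NetworkManager/dnsmasq.d/cache.conf (the filename is arbitrary) containing:

cache-size=500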

6. If you are managing OpenVPN yourself, here are some scripts for managing your resolv.conf using resolvconf on connect/disconnect.

View this page for more info on dnsmasq/NetworkManager integration.