This past week at work, I needed some directly attached boxes to share a storage array, a Backblaze storage pod, over NFS. This was necessary because the pods themselves don’t have much in the way of compute resources to handle the load required to back up our datasets.

Looking into this, I realized that tuning NFS was an easy and surefire way to keep it from taking extra resources on my pod. Plus, I knew I didn’t exactly need error checking along the way, since this isn’t a public network, just a 192.168.1.0/24 with only two hosts (you could also do a /30).

Note that the following options are used in a controlled environment. They may not be optimal for you and are in no way “secure”.

The Server

Let’s take a look at how I set up the server in /etc/exports:

/backup 192.168.1.0/24(rw,async,insecure,no_subtree_check,no_auth_nlm)

Let’s talk about the non-obvious options:

async - This gives us the ability to tell NFS not to wait for incoming data to actually be flushed to disk before returning an “ok” to the client. Note that this is probably dangerous if the server doesn’t have enough RAM to support the caching needed at the transfer speeds you’re pushing.

insecure - This option merely allows a client to connect from a port higher than 1024.

no_subtree_check - This is actually the default, but I set it here so NFS doesn’t remind me every time I reload my configuration.

no_auth_nlm - With this, we disable requiring auth for a client to perform locking requests.

You can read more about the available options in the exports(5) man page.
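
If you tweak /etc/exports later, re-exporting and then checking what the server actually picked up is quick. This is just a sketch, assuming the stock nfs-kernel-server tooling on Ubuntu:

# Re-export everything in /etc/exports without restarting the NFS server
sudo exportfs -ra

# List the active exports with the options in effect (async, insecure, etc.)
sudo exportfs -v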

The Client

It’s important to note that a lot of the key options we care about live in the client configuration. Here is my /etc/fstab:

storage-pod:/backup	/backup	nfs vers=3,rw,async,noatime,nodiratime,udp	0	0

Again, let’s hit on some of the options here:

vers=3 - Long story short, on Ubuntu I wasn’t able to enable UDP with the default, NFSv4, so I fell back to NFSv3.

noatime/nodiratime - Disable updates to file and directory access times, which avoids needless metadata writes.

udp - Tell NFS to use UDP instead of the usual default of TCP. This can be dangerous and is usually not recommended. However, because of my environment, I’m able to accept those consequences, which should be minimal since this is a direct connection. There is plenty of intense discussion out there on using UDP with NFS and the implications that go with it.
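
With the fstab entry in place, mounting it and double-checking that the options actually took only needs a couple of commands. A sketch, assuming the usual nfs-common client tooling:

# Mount everything in /etc/fstab that isn't already mounted
sudo mount -a

# Show each NFS mount along with the options in effect (vers=3, proto=udp, noatime, ...)
nfsstat -m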

Both

On the internal interface of both hosts, we enabled jumbo frames in /etc/network/interfaces:

auto eth1
iface eth1 inet static
    address 192.168.1.1
    netmask 255.255.255.0
    mtu 9000
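
To confirm jumbo frames actually work end to end, a don’t-fragment ping at the maximum payload size does the trick. A sketch, assuming the other pod sits at 192.168.1.2:

# Confirm the interface picked up the new MTU
ip link show eth1

# 8972 = 9000 byte MTU - 20 byte IP header - 8 byte ICMP header
# -M do forbids fragmentation, so this fails loudly if the path MTU is smaller
ping -M do -s 8972 -c 3 192.168.1.2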

Results

With these settings, we easily get over 100 MB/s, no problem, which is close to wire speed for this link: 1G Ethernet across a 3ft cable.
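
If you want to reproduce a rough number like that, writing a large file across the mount is enough of a sanity check. A sketch; the test filename is just a placeholder:

# Push 4GB over the mount and let dd report the throughput;
# conv=fdatasync makes it flush before printing the final number
dd if=/dev/zero of=/backup/ddtest bs=1M count=4096 conv=fdatasync
rm /backup/ddtest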

As for the server, it seems to be handling the load pretty well. On average, the nfs and zfs processes are eating well over one core. Memory is mostly used up, but the box only has 8GB, so that’s not a huge deal.
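
If you want to keep an eye on the same things on your own server, the standard tools are enough. A rough sketch:

# Per-call NFS server statistics
nfsstat -s

# The nfsd and zfs kernel threads show up in top as [nfsd], [z_wr_iss], [txg_sync], and so on
top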
