Our new Thick Clients: Journey to the Dell XPS 15

Early in 2016, my co-founder @seglberg and I decided it was time to rethink our workstations. We both had ThinkPads, which were alright but lacked performance and weren't up to the workloads we now require. While they've treated us well, we decided to look around and see what's fresh in the laptop market, especially with the new Intel Skylake architecture available! With the new things we're working on, it's essential that we can quickly run compression, encryption, Docker builds, virtual machines, and so on.

Continue reading ↦

Going public, converting to Arch, and being more social

Going Public… Just over a week ago, my company rolled out our public presence: a fresh website, a LinkedIn profile, and even Twitter. I also want to thank all the wonderful people who have sent luck our way and those who have supported us thus far…you are awesome! On the distro front, I wanted to mention my recent decision to move to Arch Linux, an amazingly light, responsive, and elegant Linux distribution.

Continue reading ↦

Hunky Dory

Late last week, I resigned from my position at Arbor Networks in order to join a stealth startup. Unfortunately, I didn't get to say goodbye to any coworkers, since I have to stay discreet about the details of the new company. Either way, I'm hitting the ground running at the new gig and having a blast! Don't worry, we'll be going public pretty soon, so keep an eye out! I want to thank all the people who have already shown their support and reached out to wish me luck.

Continue reading ↦

Streaming large amounts of data!

I recently ran into a situation where I needed to copy a large amount of data from one box to another over the LAN. In a situation like this, the following things are usually true, and for this project they all were:

- Just creating a tar archive on the source box and transferring it over isn't going to fly.
- The source contains many files and directories (in the millions); enough that it's not even practical to use file-based methods to move the data over.
- The disk the data resides on is not exactly "fast" and may be exceptionally "old".
- We need to maximize transfer speed, and we don't care about "syncing"; we just want a raw dump of the data from one place to another.
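For a problem shaped like this, one common trick (a sketch, not necessarily the exact solution from the full post) is to stream a single tar archive straight over a raw socket with netcat: no intermediate archive on disk, and no per-file transfer overhead. The hostname and port here are placeholders:

    # On the destination box: listen and unpack the stream as it arrives.
    # (Netcat flag syntax varies by implementation; this is the OpenBSD style.)
    nc -l 9000 | tar -xf - -C /dest

    # On the source box: tar straight into the socket.
    tar -cf - -C /data . | nc destbox 9000

Since tar reads the source sequentially and nothing touches a filesystem in between, the slow old disk stays the only bottleneck.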

Continue reading ↦

Backing up to S3: The Containerized Way

I recently decided to jump into the object storage revolution (yeah, I'm a little late). The drive comes partly from my very old archives, which I'd like to store offsite, but also from wanting to streamline how I deploy applications that have things like data directories and databases needing backups. Lately, through my work at Arbor and my own personal dabbling, I've come to love the idea that a service may depend on one or more containers to function.
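As a hypothetical sketch of that idea (image names, bucket, and paths are all made up for illustration), a companion container can mount the application's data volume read-only and push it to S3, while the app container stays blissfully unaware:

    # Create a named volume and run the application against it
    docker volume create appdata
    docker run -d --name app -v appdata:/var/lib/app myorg/app

    # Run a companion container that mounts the same volume read-only
    # and syncs it offsite with the AWS CLI (credentials passed through
    # from the host environment)
    docker run --rm -v appdata:/var/lib/app:ro \
        -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY \
        amazon/aws-cli s3 sync /var/lib/app s3://example-bucket/app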

Continue reading ↦

Handling Cron inside your container

Sometimes, you need an application to run at a scheduled time. Ideally, it would be a really cool feature if you could merely tell the Docker daemon to do this via some sort of schedule, say * 1 * * * in your docker-compose.yml. Sadly, this isn't really possible. So you have two options: base your image on one that already has cron installed, or simply install cron yourself. Either way, there are a few things you need to watch out for.
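As a rough sketch of the second option (the schedule, paths, and backup.sh script are invented for illustration), the steps baked into an image boil down to this, the crucial detail being that cron must run in the foreground so the container doesn't exit immediately:

    # Install cron inside the image (e.g. from a Dockerfile RUN step)
    apt-get update && apt-get install -y cron

    # Drop the schedule into /etc/cron.d; note these files need a user field
    echo '* 1 * * * root /usr/local/bin/backup.sh' > /etc/cron.d/backup
    chmod 0644 /etc/cron.d/backup

    # Make cron the container's main process, running in the foreground
    cron -f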

Continue reading ↦

Redirection in HAProxy

I wanted to mention something I just set up at work. The gist of it involves the need to support shortnames/search domains. This allows a user to type "bugzilla/" into their browser instead of an FQDN, i.e. "bugzilla.example.com". Of course, the DNS search domain of "example.com" must be configured (either manually or via DHCP). Enter hdr_beg(host). Using HAProxy, we can actually do one of three things with the Host header (there are more, but these are the ones we care about):
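The full rundown is in the post, but as one hedged example of the general shape (hostnames, backend names, and addresses are placeholders, not the actual work config), an ACL can catch the bare shortname in the Host header and bounce it to the canonical FQDN:

    frontend http-in
        bind *:80
        # Host header is exactly the bare shortname
        acl host_short hdr(host) -i bugzilla
        # Host header already begins with the FQDN
        acl host_fqdn  hdr_beg(host) -i bugzilla.example.com
        # Redirect shortname requests to the FQDN, keeping the URI intact
        redirect prefix http://bugzilla.example.com code 301 if host_short
        use_backend bugzilla_servers if host_fqdn

    backend bugzilla_servers
        server bz1 10.0.0.5:80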

Continue reading ↦

Superfast NFS Tuning

In the past week at work, I've needed to use some directly attached boxes, working over NFS, to share a storage array (the Backblaze storage pod, actually). This was necessary because the pods don't have enough compute resources to handle the load required to back up our datasets. Looking into this, I realized that optimizing NFS was an easy and surefire way to ensure it wasn't taking up extra resources on my pod.
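The specific knobs are in the full post; as a hedged sketch, though, the usual levers are larger read/write block sizes on the client and cheaper write semantics on the server (the hostname, paths, and values below are only examples):

    # Server side, in /etc/exports: async acknowledges writes before they
    # hit disk, trading safety for throughput
    /mnt/array  10.0.0.0/24(rw,async,no_subtree_check)

    # Client side: large rsize/wsize, and skip atime updates entirely
    mount -t nfs -o rsize=1048576,wsize=1048576,noatime pod1:/mnt/array /mnt/backup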

Continue reading ↦

If you don't enable CDP, there's something wrong with you.

Hmmm… I wonder what switch port this box is connected to???

    [~]> apt-get install cdpr
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    The following NEW packages will be installed:
      cdpr
    0 upgraded, 1 newly installed, 0 to remove and 31 not upgraded.
    Need to get 17.4 kB of archives.
    After this operation, 102 kB of additional disk space will be used.
    Get:1 http://us.archive.ubuntu.com/ubuntu/ trusty/universe cdpr amd64 2.
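Once it's installed, you just point cdpr at the right interface and let the switch answer the question itself. If I'm remembering the flags right, -d names the device to listen on (the interface name here is an example); plain tcpdump can also grab the same announcement:

    # Wait for the next CDP announcement on eth0 (Cisco gear sends one
    # every 60 seconds by default) and print the switch name and port
    cdpr -d eth0

    # Alternatively, capture a single CDP frame with tcpdump and let -v
    # decode it
    tcpdump -nn -v -i eth0 -s 1500 -c 1 'ether[20:2] == 0x2000'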

Continue reading ↦

My First ZFS Experience: Taming 45 drives

At work, we have a couple of Backblaze storage pods (version 3, with 4 TB drives) that we use for backup purposes. They were obtained before my time, because quick, bulk storage was needed to back up our object storage platform, Swift. Sadly, the boxes were deployed in an unsatisfactory manner, whereby all 45 drives were pooled together into one gigantic LVM volume, meaning any one disk could die and data loss would occur.
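ZFS fixes exactly this by building the pool out of redundant vdevs instead of one big stripe. A minimal sketch of the idea (pool name, device names, and grouping are illustrative, not the pod's real layout), where each raidz2 group survives two simultaneous disk failures:

    # Build one pool from raidz2 vdevs; with 45 drives you would use
    # several such groups rather than the two shown here
    zpool create -o ashift=12 backup \
        raidz2 sda sdb sdc sdd sde sdf \
        raidz2 sdg sdh sdi sdj sdk sdl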

Continue reading ↦