Storage


Changing UUIDs on LUKS Encrypted Partitions

I recently obtained a disk dock and cloning unit (StarTech.com) for working with some of my internal drives (I have too many). This unit does a bit-by-bit clone of one disk to another, which is really useful! The problem is that each disk then looks exactly the same to your operating system, meaning there is no way to mount them both at the same time! Furthermore, I decided to create a LUKS encrypted drive protecting an ext4 partition.
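The excerpt stops before the fix, but the usual approach is to re-UUID both layers of the clone; a hedged sketch (the device and mapper names are placeholders, not from the post):

```shell
# Give the cloned LUKS header a fresh UUID so it no longer collides
# with the original disk (/dev/sdb1 is a placeholder device):
cryptsetup luksUUID /dev/sdb1 --uuid "$(uuidgen)"

# Then open the container and re-UUID the ext4 filesystem inside it too:
cryptsetup luksOpen /dev/sdb1 clone
tune2fs -U random /dev/mapper/clone
```

Both UUIDs matter: the OS sees the LUKS header's UUID before unlock and the filesystem's UUID after.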

Continue reading ↦

Streaming large amounts of data!

I recently ran into a situation where I needed to copy a large amount of data from one box to another in a LAN environment. In a situation like this, the following things are usually true (at least for this project they were):

- Just creating a tar archive on the source box and transferring it over isn't gonna fly.
- The source contains many files and directories (in the millions); enough that it's not even practical to use file-based methods to move the data over.
- The disk the data resides on is not exactly very “fast” and may be exceptionally “old”.
- We need to maximize transfer speed, and we don't care about “syncing”; we just want a raw dump of the data from one place to another.
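Under those constraints, the classic trick is to stream a tar pipeline straight over a raw TCP socket; a minimal sketch, assuming netcat on both ends (the address, port, and paths are placeholders, and this may not be the exact pipeline the post builds):

```shell
# Receiving box: listen and unpack the stream as it arrives
# (some netcat variants want `nc -l -p 9999` instead):
nc -l 9999 | tar -x -C /dst

# Sending box: stream the whole tree over TCP with no temporary archive:
tar -c -C /src . | nc 192.168.1.20 9999
```

No per-file protocol overhead, no archive staged on the slow source disk.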

Continue reading ↦

Backing up to S3: The Containerized Way

I recently decided to jump into the object storage revolution (yeah, I’m a little late). This drive comes from my very old archives, which I’d like to store offsite, but also from wanting to more easily streamline how I deploy applications that have things like data directories and databases that need to be backed up. The Customary: Lately, through my work at Arbor and my own personal dabbling, I’ve come to love the idea that a service may depend on one or more containers to function.

Continue reading ↦

Superfast NFS Tuning

In the past week at work, I’ve had a need to utilize some directly attached boxes working over NFS to share a storage array (the Backblaze storage pod, actually). This was necessary as the pods don’t have many compute resources to handle the load required to back up our datasets. Looking into this, I realized that optimizing NFS was an easy and surefire way to ensure it wasn’t taking extra resources on my pod.
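One common server-side knob on a CPU-starved NFS box, sketched here with a guessed thread count (the excerpt doesn't say which settings the post actually tunes):

```shell
# Bump the number of kernel nfsd threads from the usual default of 8;
# 16 is an illustrative value, not a recommendation from the post:
rpc.nfsd 16

# The "th" line here reports the thread count and how busy threads are:
grep th /proc/net/rpc/nfsd
```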

Continue reading ↦

My First ZFS Experience: Taming 45 drives

At work, we have a couple of Backblaze storage pods (version 3, with 4TB drives) that we use for backup purposes. They were obtained before my time, because quick, bulk storage was necessary to back up our object storage platform, Swift. Sadly, the boxes were deployed in an unsatisfactory manner, in which all 45 drives were pooled together in one gigantic LVM formation, meaning any one disk could die and data loss would occur.
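By contrast, a ZFS layout splits the disks into redundant vdevs; a hedged sketch (the pool name, device names, and grouping are illustrative, not the layout the post ends up with):

```shell
# Two raidz2 vdevs of five disks each: any two disks per vdev can
# fail without data loss, unlike one giant striped LVM volume:
zpool create backup \
  raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde \
  raidz2 /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj

# Confirm the vdev layout and health:
zpool status backup
```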

Continue reading ↦

Keeping NFS from crashing hard

Recently, it seems my NFS connections have been crashing hard. Most recently, a file saved from my browser to my server’s mount point via NFS caused the VM I was using to completely lock up :( So I did some quick searching and found these options, which you should use when mounting any NFS shares:

bg: If the first NFS mount attempt times out, retry the mount in the background.
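In fstab form, that option looks like the sketch below (the server, export path, and companion options are placeholders; the excerpt only names bg, so the rest are common pairings rather than the post's full list):

```shell
# /etc/fstab entry: bg retries a timed-out first mount in the background
# instead of hanging boot; soft/timeo/retrans bound how long a dead
# server can stall I/O (illustrative values):
# server:/export  /mnt/nfs  nfs  bg,soft,timeo=50,retrans=3  0  0

# Equivalent one-off mount:
mount -o bg,soft,timeo=50,retrans=3 server:/export /mnt/nfs
```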

Continue reading ↦

Some information regarding RAID striping and performance

So I’ve recently done more research into RAID array performance and the variables that impact speed. I’ve found some really good articles on this, discussing mainly chunk size but also stripe width, strides, and all kinds of fun terms you didn’t know existed. Plus, I found some useful info on how to properly format a RAID array for optimal performance! Disk Performance, Part 2: RAID Layouts and Stripe Sizing
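The arithmetic behind those terms is worth spelling out; a sketch with assumed numbers (512 KiB chunk, 4 data disks, 4 KiB filesystem blocks; these are not figures from the post):

```shell
# stride       = chunk size / filesystem block size
# stripe-width = stride * number of data-bearing disks
chunk_kib=512     # assumed md chunk size
data_disks=4      # e.g. a 6-disk RAID 6 has 4 data disks
block_kib=4       # typical ext4 block size

stride=$((chunk_kib / block_kib))
stripe_width=$((stride * data_disks))

# Pass both to mkfs so ext4 aligns allocations to the array geometry:
echo "mkfs.ext4 -E stride=$stride,stripe-width=$stripe_width /dev/md0"
```

With these numbers the result is stride=128, stripe-width=512.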

Continue reading ↦

Listing out your drives, the pretty way

Just found the command I’ve forever been looking for:

goliath# blkid -o list
device     fs_type            label      mount point    UUID
--------------------------------------------------------------------------------------------
/dev/sda2  ext4               golvm      /mnt/ssd2      158dac38-a368-4a37-983e-8e4b63cc838f
/dev/sdd   linux_raid_member  goliath:1  (in use)       4c9df4da-6def-7a1b-f269-1137c0c49112
/dev/sdb1  ext4                          /              d288026a-a2d2-45c0-b848-3ac032909b33
/dev/md0   ext4                          /mnt/raid      5193fa69-3c56-46cd-90bd-31036c931f5e
/dev/sda1  ext4                          /mnt/ssd1      bc2503ac-ad7a-4c70-8127-6ed37c96548f
/dev/sdl1  ext4                          /mnt/usb       becc31d7-35ff-4145-876a-2520460ff532
/dev/sdi   linux_raid_member  goliath:1  (in use)       4c9df4da-6def-7a1b-f269-1137c0c49112
/dev/sdk   linux_raid_member  goliath:1  (in use)       4c9df4da-6def-7a1b-f269-1137c0c49112
/dev/sdh   linux_raid_member  goliath:1  (in use)       4c9df4da-6def-7a1b-f269-1137c0c49112
/dev/sdj   linux_raid_member  goliath:1  (in use)       4c9df4da-6def-7a1b-f269-1137c0c49112
/dev/sdf   linux_raid_member             (in use)       c2e53423-5bc2-a1e6-fcbf-496432a662fa
/dev/sdc1  ext4                          /mnt/500dump   05fe6113-5433-45b1-9fb6-2346d94534b0
/dev/md1   jfs                           (not mounted)  9413d08a-fd5d-4f26-a876-198565f5e392
goliath#
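A similar view is also available from lsblk, which ships in util-linux alongside blkid (the column list here is just one reasonable choice, not from the post):

```shell
# Per-device filesystem, label, mountpoint, and UUID in tree form:
lsblk -o NAME,FSTYPE,LABEL,MOUNTPOINT,UUID
```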



Setting up a hard quota with a directory (on XFS)

So I needed to do per-directory hard quotas for my users. Luckily, XFS supports “project” quotas, which allow a directory hierarchy to be soft- or hard-limited. To set this up for a user, let’s call him derpface: Add the option “pquota” to fstab so the XFS partition gets mounted with project quotas enabled. Run the following commands to set up the project and directory declarations:

mkdir /srv/derpface
echo 1600:/srv/derpface >> /etc/projects
echo derpface:1600 >> /etc/projid

Note that the “1600”
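The excerpt cuts off before the quota is actually applied; a hedged continuation sketch (the mount point and the 10g limit are assumptions, not from the post):

```shell
# Tag the directory tree as project "derpface", then give that
# project a hard block limit (10g is an illustrative value):
xfs_quota -x -c 'project -s derpface' /srv
xfs_quota -x -c 'limit -p bhard=10g derpface' /srv
```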

Continue reading ↦

NFS username mapping even with same uid

Ran into this with my internal testing boxes. Basically, I would mount a simple insecure uid/gid-mapped share:

/mnt 192.168.1.0/24(rw,all_squash,insecure,no_subtree_check,anonuid=1000,anongid=1000)

on a test box with a username different from the one on the server, but with the same uid (1000). With NFSv3, this would have been fine; with NFSv4, there are some differences. Hence, things like SSH keys would not work, because while I could remove and create new files, the files that already existed there were still not technically mine?
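One common client-side workaround for NFSv4's name-based mapping, sketched here (this may not be the fix the post settles on):

```shell
# Fall back to raw numeric IDs so a matching uid 1000 lines up even
# though the usernames differ (kernel NFSv4 idmapping toggle):
echo Y > /sys/module/nfs/parameters/nfs4_disable_idmapping
# ...then remount the share. Alternatively, set the same Domain on
# both client and server in /etc/idmapd.conf so names map cleanly.
```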

Continue reading ↦

Rescan SCSI devices connected to your system

So I have an Areca 1680ix RAID card which I use to interface all of my disks with my operating system (usually pass-through). Since I do a lot of hot-swapping, it is necessary for me to rescan the connected SCSI devices for the operating system to recognize them and give them a drive letter. Additionally, you don’t want this to disable any of your currently mounted drives. A lot of sites tell you to unload and reload the kernel module that represents your RAID card driver.
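The non-disruptive alternative is to poke the sysfs scan files instead of reloading the driver; a sketch (SCSI host numbering varies per system):

```shell
# "- - -" means all channels, all targets, all LUNs. Already-attached
# drives are left alone, so mounted filesystems stay up:
for host in /sys/class/scsi_host/host*; do
    echo "- - -" > "$host/scan"
done
```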

Continue reading ↦