When migrating from one server to another, you usually need to migrate the SQL database as well. There are two ways to do this: dump the old database to a file (or files) and re-import that file on the new server, or copy the database files themselves (/var/lib/mysql) over to the new server, making sure they are owned by the mysql user on the new machine!
If you use the first method for migration/backup, it's also a good idea to back up the MySQL users you've created and the grants you've given them on specific databases.
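If you go the dump-and-reimport route, the basic flow looks something like this; the file name and root credentials are placeholders, a sketch rather than the exact commands from this migration:

# On the old server: dump all databases, including the mysql schema
# that holds the user accounts and grants
mysqldump -u root -p --all-databases > all-databases.sql
# On the new server: import the dump, then reload the grant tables
mysql -u root -p < all-databases.sql
mysql -u root -p -e "FLUSH PRIVILEGES;"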
Just a quick note: I found a fork of UNetbootin called TuxBoot. I highly recommend it, as it actually launched in Mint, unlike the latest UNetbootin, which:
Doesn't find USB devices on OS X.
Won't even launch on certain Linux distros, for whatever reason. TuxBoot supports Linux and Windows… and worked for me the first time around :)
Docker gives you the ability to run Linux containers, or "chroot on steroids," which use a layered approach (via device-mapper or AUFS) to let users create images, build containers off of them, and deploy applications quickly for both development and production (and maintain uniformity!).
Before we start, note that virtually every major service/application has already been "*dockerized*", meaning at least one person has made a Docker repo for it! For examples, just do searches like "*docker-nginx*" or "*docker-powerdns*".
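As a quick illustration of that workflow, here's a minimal sketch; the nginx image and the port mapping are just examples, not something prescribed by this post:

# Search Docker Hub for community nginx images
docker search nginx
# Pull the official nginx image and run it as a detached container,
# mapping host port 8080 to port 80 inside the container
docker pull nginx
docker run -d --name web -p 8080:80 nginx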
Hmm, I kind of want to play Dig Dug now :(
If you have an IP you'd like to perform a reverse lookup on to find the domain associated with it, look no further than the very lightweight and informative dig program:
dig +noall +answer -x 199.232.41.10
Note that PTR records are not required for IP addresses. If a PTR record is not defined for an IP address, you cannot do a reverse DNS lookup on it.
So I needed to take in pasted stdin that spanned multiple lines. Plain read, even with the flags it provides, doesn't give a clean way to do this. However, using cat or sed, you can give the user a couple of nice ways to end the input stream and submit their entry:
Ctrl+d:

#!/bin/bash
echo "Pipe in certificate, or paste it and hit Ctrl-d when done"
keyvariable=$(cat)

Blank Line:
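The blank-line variant's script isn't shown above, but a minimal sketch using sed (quitting at the first empty line) could look like this:

#!/bin/bash
echo "Paste the certificate, then press Enter on an empty line to submit"
# sed quits at the first blank line; the command substitution strips
# trailing newlines, so only the pasted content lands in the variable
keyvariable=$(sed '/^$/q')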
Recently, my NFS connections have been crashing hard; most recently, saving a file from my browser to my server's mount point over NFS caused the VM I was using to completely lock up :( So I did some quick searching and found these options, which you should use when mounting any NFS shares:
bg
If the first NFS mount attempt times out, retry the mount in the background.
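As a sketch, an /etc/fstab entry using bg might look like the line below; the server name, paths, and the companion options besides bg are assumptions for illustration:

# Retry the mount in the background if the first attempt times out
fileserver:/export/data  /mnt/data  nfs  bg,rw,hard,intr  0  0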
MySQL's InnoDB engine supports compression of table data using the zlib compression algorithm.
Per the official documentation, it is quite easy to create or alter a table to support compression! It helps quite a bit with columns that contain a lot of text (the TEXT, MEDIUMTEXT, and LONGTEXT column types).
Here is how I altered my table using phpMyAdmin (since I didn't see an obvious place in the GUI to do it, I just ran SQL statements directly on the DB):
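The exact statements aren't preserved here, but per the MySQL documentation the pattern is the one below; the table name and KEY_BLOCK_SIZE are placeholders:

ALTER TABLE my_table ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;

On older MySQL versions you also need innodb_file_per_table enabled and innodb_file_format set to Barracuda for the compressed row format to take effect.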
Using 7z:
7z l file.7z
7z x file.7z directory/neededfile.txt
Using Tar:
tar ztvf file.tgz
tar xzvf file.tgz directory/neededfile.txt
Note the “*t*” is the argument telling tar to list files, so you could do “*jtvf*” for a bz2 archive, etc…
So I've recently done more research into RAID array performance and the variables that impact speed. I found some really good articles on this, discussing mainly chunk size but also stripe width, stride, and all kinds of fun terms you didn't know existed. Plus, I found some useful info on how to properly format a RAID array for optimal performance!
Disk Performance, Part 2: RAID Layouts and Stripe Sizing
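As a concrete illustration of the stride/stripe-width math those articles walk through, here's a hedged sketch for a hypothetical 4-disk RAID 5 array (3 data disks) with a 512 KiB chunk and 4 KiB ext4 blocks; the device name and geometry are assumptions:

# stride = chunk size / filesystem block size = 512 KiB / 4 KiB = 128
# stripe-width = stride * number of data disks = 128 * 3 = 384
mkfs.ext4 -b 4096 -E stride=128,stripe-width=384 /dev/md0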