Archive for the 'Guides' Category

Compress a MySQL database table

MySQL databases using the InnoDB storage engine support compression of table data with the zlib compression algorithm.

Going by the official documentation, it is quite easy to create or alter a table to support compression. It helps quite a bit with columns that contain a lot of text (the TEXT, MEDIUMTEXT, and LONGTEXT column types).

Here is how I altered my table using phpMyAdmin (since I didn't see an obvious place in the GUI to do it, I just ran the following SQL statements against the database):

SET GLOBAL innodb_file_per_table=1;
# MySQL returned an empty result set (i.e. zero rows).

SET GLOBAL innodb_file_format=Barracuda;
# MySQL returned an empty result set (i.e. zero rows).

ALTER TABLE herpderp ROW_FORMAT=COMPRESSED;
# MySQL returned an empty result set (i.e. zero rows).

As you can see, after clicking Go, I got the commented text back saying everything seems to be successful. :)
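
If you want to confirm the change actually took, you can ask information_schema for the table's row format. A minimal sketch (the database name and credentials here are placeholders, not from the original post):

# check that the table is now stored compressed (db name/user are made up)
mysql -u root -p mydb -e \
  "SELECT TABLE_NAME, ROW_FORMAT, CREATE_OPTIONS
   FROM information_schema.TABLES
   WHERE TABLE_NAME = 'herpderp';"
# ROW_FORMAT should read 'Compressed'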

Re-defining a variable composed of dynamic variables

In one of my scripts, I wanted the log file name to change dynamically based on what the script was doing. For example, if the script was waiting on another script, or had generated an error for whatever reason, I wanted the log file name to reflect that.

I started with this. I used $STATUS as the switch and called a function (setstatus) to change it based on what was passed in (by another function or whatever):

1. In a conf file (script.conf):

...
LOG="$NAME-$STATUS-$FILENAME.log"
...

2. In the actual script:

RUNNING="Running"  ## Doing this so I can easily define what the text looks like for each case
WAITING="Waiting"
ERROR="ERROR"
FINISHED="Finished"
STATUS="$RUNNING"

source script.conf
...
...
setstatus () {
# This function changes the status of the script and changes the log file appropriately (e.g. $NAME-$STATUS-$FILENAME.log)
# It takes in the new status to set. Don't call this unless you actually want to change the file while mid/end-script.

# Available statuses are:
#1# Running - the script is currently running
#2# Waiting - the script is waiting on another script to continue
#3# ERROR - the script has finished but with an error.
#4# Finished - the script has completely finished successfully

if [ "$LOG" ]; then
    OLDLOG="$LOG"
    case "$1" in
        1)  [ "$STATUS" != "$RUNNING" ] && STATUS="$RUNNING" && mv "$OLDLOG" "$LOG"
            ;;
        2)  [ "$STATUS" != "$WAITING" ] && STATUS="$WAITING" && mv "$OLDLOG" "$LOG"
            ;;
        3)  [ "$STATUS" != "$ERROR" ] && STATUS="$ERROR" && mv "$OLDLOG" "$LOG"
            ;;
        4)  [ "$STATUS" != "$FINISHED" ] && STATUS="$FINISHED" &&  mv "$OLDLOG" "$LOG"
            ;;
    esac
fi
}

3. What actually came from that was a non-working script. The mv command would get mad at me because I was trying to rename a file to the same name it already had, so the $STATUS shown in the filename never changed from "Running"... wonder why?
Turns out, just because you use a variable later on doesn't mean the variables inside its definition get re-evaluated. A shell variable is expanded once, at the time the assignment runs; whatever the inner variables held at that moment is what gets baked in. In my case, the assignment happens as soon as the script starts, hence the permanent "Running".

4. How can we fix this... hmm. Maybe an eval "LOG=$LOG" will re-evaluate it for us? Nope. Even if you throw a '\' before $STATUS, this doesn't work, because $LOG was already fully expanded at the start of our script (as soon as we sourced the conf file).

5. What I did: I created a variable named LOGTEMPLATE, escaped the variables in it that I wanted re-expanded, and then instead of an eval "LOG=$LOG", I did an eval "LOG=$LOGTEMPLATE".
When you run this, eval expands $LOGTEMPLATE, which in turn expands $STATUS and the other inner variables. So whatever $STATUS is set to at that moment is what ends up in $LOG... clever huh?
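
To see the difference in isolation, here's a minimal sketch (the file names here are made up for the demo, not from my actual script):

#!/bin/bash
STATUS="Running"
LOG="script-$STATUS.log"              # expanded immediately: script-Running.log
LOGTEMPLATE="script-\$STATUS.log"     # the \$ keeps $STATUS unexpanded for later

STATUS="Waiting"
echo "$LOG"                           # still script-Running.log
eval "LOG=$LOGTEMPLATE"               # re-expands the template with the new status
echo "$LOG"                           # now script-Waiting.log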

6. So now, my script looks like so:

script.conf:

...
LOGTEMPLATE="$NAME-\$STATUS-$FILENAME.log"
...

The script:

RUNNING="Running"  ## Doing this so I can easily define what the text looks like for each case
WAITING="Waiting"
ERROR="ERROR"
FINISHED="Finished"
STATUS="$RUNNING"

source script.conf
[ "$LOGTEMPLATE" ] && eval "LOG=$LOGTEMPLATE"
...
...
setstatus () {
# This function changes the status of the script and changes the log file appropriately (e.g. $NAME-$STATUS-$FILENAME.log)
# It takes in the new status to set. Don't call this unless you actually want to change the file while mid/end-script.

# Available statuses are:
#1# Running - the script is currently running
#2# Waiting - the script is waiting on another script to continue
#3# ERROR - the script has finished but with an error.
#4# Finished - the script has completely finished successfully

if [ "$LOG" ]; then
    OLDLOG="$LOG"
    case "$1" in
        1)  [ "$STATUS" != "$RUNNING" ] && STATUS="$RUNNING" && eval "LOG=$LOGTEMPLATE" && mv "$OLDLOG" "$LOG"
            ;;
        2)  [ "$STATUS" != "$WAITING" ] && STATUS="$WAITING" && eval "LOG=$LOGTEMPLATE" && mv "$OLDLOG" "$LOG"
            ;;
        3)  [ "$STATUS" != "$ERROR" ] && STATUS="$ERROR" && eval "LOG=$LOGTEMPLATE" && mv "$OLDLOG" "$LOG"
            ;;
        4)  [ "$STATUS" != "$FINISHED" ] && STATUS="$FINISHED" && eval "LOG=$LOGTEMPLATE" &&  mv "$OLDLOG" "$LOG"
            ;;
    esac
fi
}
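
Calling it is then just a matter of passing the status number from wherever the script changes state; a quick sketch:

# the script is about to block on another job
setstatus 2     # renames the log to $NAME-Waiting-$FILENAME.log
...
setstatus 1     # back to $NAME-Running-$FILENAME.log once the wait is over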

Thanks linuxquestions.

Moving your local repo to a new remote repo

The server I usually use to host my repo is going offline, so I decided to just move it to Bitbucket and not have to worry.

$ git remote
origin

So obviously we only have the one remote, origin.

git remote remove origin
git remote add origin git@bitbucket.org:username/repo.git

Then let's push up our repo and all the data associated with it (i.e. your commits):

git push -u origin --all # pushes up the repo and its refs for the first time
git push -u origin --tags # pushes up any tags
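
If you want to double-check that everything now points at the new remote, listing the configured URLs is enough (the Bitbucket address below is just the placeholder from above):

git remote -v
# origin  git@bitbucket.org:username/repo.git (fetch)
# origin  git@bitbucket.org:username/repo.git (push)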

And that's it!

As a side note, I decided not to have dual remotes as it could get messy. Thanks for the warning, stackoverflow.

Migrating from Maildir to Gmail

0. Delete old emails if necessary and turn off conversation view (Settings > General), since conversation view lumps everything together. (Note: you can use search operators like "before:2014/4/29".)
1. Use getmail to pull all emails in mbox or Maildir format from your old email account. I initially pulled to Maildir, which is why I convert to mbox for Thunderbird below.

I configured like this for pulling from our corporate exchange server:

[retriever]
type = SimpleIMAPSSLRetriever
server = exchange.herpderp.net
username = merry0@lolcakes.net
password = secret
mailboxes = ("Inbox", "Sent Items")

[destination]
type = Maildir
path = /home/merry0/mail/

[options]
verbose = 2
message_log = ~/.getmail/log

# do not alter messages
delivered_to = false
received = false
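
With that saved, the actual pull is a single command. getmail reads ~/.getmail/getmailrc by default; the file name below is an assumption, so adjust it to wherever you saved the config:

# pull everything from the Exchange account into /home/merry0/mail/ (per the [destination] block above)
getmail --rcfile getmailrc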

1a. My emails from Exchange had a pretty dire problem. For whatever reason (Google didn't tell me much on this), '=SMTP:' was prepended to every address in every To field, on every single email, so a To line would look like "to:=SMTP:merry0@lolcakes.net".

Yeah, this was super annoying. To fix it, once everything was in Maildir format, I ran this fancy command in the directory holding all the emails:

grep -rl '=SMTP:' * | xargs sed -i 's/=SMTP://g'

2. Convert to mbox with a script (only needed if your email is in Maildir format; if you are already in mbox, you're fine).

This script is nice and simple; note that you may need to do an apt-get install python-mailbox. You can also see it here.

#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
Frédéric Grosshans, 19 January 2012
Nathan R. Yergler, 6 June 2010
 
This file does not contain sufficient creative expression to invoke
assertion of copyright. No warranty is expressed or implied; use at
your own risk.
 
---
 
Uses Python's included mailbox library to convert mail archives from
maildir [http://en.wikipedia.org/wiki/Maildir] to
mbox [http://en.wikipedia.org/wiki/Mbox] format, including subfolders.
 
See http://docs.python.org/library/mailbox.html#mailbox.Mailbox for
full documentation on this library.
 
---
 
To run, save as md2mb.py and run:
 
$ python md2mb.py [maildir_path] [mbox_filename]
 
[maildir_path] should be the path to the actual maildir (containing new,
cur, tmp, and the subfolders, which are hidden directories with names like
.subfolder.subsubfolder.subsubsubfolder);
 
[mbox_filename] will be newly created, as well as a [mbox_filename].sbd
directory.
"""
 
import mailbox
import sys
import email
import os
 
def maildir2mailbox(maildirname, mboxfilename):
   """
   slightly adapted from maildir2mbox.py,
   Nathan R. Yergler, 6 June 2010

http://yergler.net/blog/2010/06/06/batteries-included-or-maildir-to-mbox-again/

   """
   # open the existing maildir and the target mbox file
   maildir = mailbox.Maildir(maildirname, email.message_from_file)
   mbox = mailbox.mbox(mboxfilename)
 
   # lock the mbox
   mbox.lock()
 
   # iterate over messages in the maildir and add to the mbox
   for msg in maildir:
       mbox.add(msg)
 
   # close and unlock
   mbox.close()
   maildir.close()
 
 
dirname=sys.argv[-2]
mboxname=sys.argv[-1]
print(dirname +' -> ' +mboxname)
mboxdirname=mboxname+'.sbd'
maildir2mailbox(dirname,mboxname)
if not os.path.exists(mboxdirname): os.makedirs(mboxdirname)
 
listofdirs=[dn for dn in os.walk(dirname).next()[1] if dn not in ['new', 'cur', 'tmp']]
for curfold in listofdirs:
   curlist=[mboxname]+curfold.split('.')
   curpath=os.path.join(*[dn+'.sbd' for dn in curlist if dn])
   if not os.path.exists(curpath): os.makedirs(curpath)
   print('| ' +curfold +' -> '+curpath[:-4])
   maildir2mailbox(os.path.join(dirname,curfold),curpath[:-4])
 
print('Done')
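
For reference, running it against the Maildir that getmail produced above would look something like this (the output file name is arbitrary):

# convert the pulled Maildir into a single mbox file plus an .sbd folder tree
python md2mb.py /home/merry0/mail/ exchange.mbox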

3. Using Thunderbird, add the ImportExportTools addon, create a Unix Movemail account, and right click on it > ImportExportTools > Import mbox file.

3a. Enable IMAP in your Gmail settings. Then add this account to Thunderbird. The standard IMAP SSL settings Google uses (e.g. imap.gmail.com) apply here. They have a guide on this.

4. Select the emails you want moved from the mbox-based account to Google. First I did the emails sent by myself: just right click > Copy > Your Gmail Account > [Gmail]/Sent Mail. Then I copied the rest to [Gmail]/All Mail.

NOTE: You may want to dump all your non-sent mail to a label instead of All Mail in case you have a ton of emails and want to track which ones you are migrating. In my case, the connection dropped somehow and Thunderbird didn't just try again, it just kinda stopped.
When you do this, the messages will not go to your inbox automatically. So once your messages are moved over, to ensure they exist in your inbox, select all the messages in your "migration" label and select Move to > Inbox. These messages will still keep the "migration" label but will also show up in your inbox!

5. Finally, you will have to re-apply your filters to the newly added emails. Go through each filter, hit edit, re-run the search for matching emails, and then do Continue > Update Filter so the filter applies to them.

Dealing with VPN on Linux Mint

Setting up VPN via NetworkManager in Linux Mint was actually really simple, but I kept having issues with DNS. I had set up my VPN server to push itself down as the DNS server to use, along with a search domain.

1. The search domain pushed from the VPN was getting mixed in with the other search domains from the local DHCP server when the connection was on eth0.
2. NetworkManager launches dnsmasq when it starts. (Note: there is no /etc/init.d/dnsmasq script or anything like that; it's merely a binary sitting in /usr/sbin that gets launched with a bunch of conf options at start.) It works similarly with openvpn when you spawn a VPN connection: on connect, the DNS address of the gateway gets loaded into dnsmasq just fine. But when I tried to ping one of the boxes on the network local to my VPN server/gateway, the name resolved as boxname.firstsearchdomain.com, which is not what I wanted.

How I fixed this:

1. Pushed "DOMAIN" instead of "DOMAIN-SEARCH" from the server/gateway:

push "dhcp-option DOMAIN house.com"
push "dhcp-option DNS 192.168.1.1"

2. You can turn off dnsmasq in /etc/NetworkManager/NetworkManager.conf if you want. It's not a big deal if you don't; either way, when you connect to the VPN, the settings pushed down from the VPN server get loaded into dnsmasq.
3. Personally, I keep it on and enable caching, since I want as little extra traffic going through the VPN as possible. You can do this by creating a file in /etc/NetworkManager/dnsmasq.d/ and placing dnsmasq configuration parameters in there; dnsmasq, unlike most programs, will happily take configuration from files in addition to its command-line parameters (see the sketch after this list).
3a. I verify that dnsmasq is picking up my configuration parameters by watching syslog for a line like: "started, version 2.66 cachesize 500"
4. You can see how NetworkManager starts dnsmasq with "ps -ef | grep dnsmasq" and check which conf options are passed (manpage).
5. You can also see the full connection details in syslog:

Apr 25 16:01:51 minty NetworkManager[27583]: <info> VPN connection 'house' (IP4 Config Get) reply received from old-style plugin.
Apr 25 16:01:51 minty NetworkManager[27583]: <info> VPN Gateway: 1.2.3.4
Apr 25 16:01:51 minty NetworkManager[27583]: <info> Tunnel Device: tun0
Apr 25 16:01:51 minty NetworkManager[27583]: <info> IPv4 configuration:
Apr 25 16:01:51 minty NetworkManager[27583]: <info>   Internal Gateway: 172.1.1.2
Apr 25 16:01:51 minty NetworkManager[27583]: <info>   Internal Address: 172.1.1.3
Apr 25 16:01:51 minty NetworkManager[27583]: <info>   Internal Prefix: 32
Apr 25 16:01:51 minty NetworkManager[27583]: <info>   Internal Point-to-Point Address: 172.1.1.2
Apr 25 16:01:51 minty NetworkManager[27583]: <info>   Maximum Segment Size (MSS): 0
Apr 25 16:01:51 minty NetworkManager[27583]: <info>   Static Route: 192.168.1.0/24   Next Hop: 192.168.1.0
Apr 25 16:01:51 minty NetworkManager[27583]: <info>   Static Route: 172.1.1.1/32   Next Hop: 172.23.23.1
Apr 25 16:01:51 minty NetworkManager[27583]: <info>   Forbid Default Route: no
Apr 25 16:01:51 minty NetworkManager[27583]: <info>   Internal DNS: 192.168.1.1
Apr 25 16:01:51 minty NetworkManager[27583]: <info>   DNS Domain: 'house.com'

6. If you are managing OpenVPN yourself, here are some scripts for managing your resolv.conf using resolvconf on connect/disconnect.
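
Here's roughly what the caching drop-in from step 3 looks like; the file name is arbitrary, and the cache size just matches the syslog line above:

# enable caching in the NetworkManager-spawned dnsmasq (file name is made up)
echo "cache-size=500" | sudo tee /etc/NetworkManager/dnsmasq.d/cache.conf

# restart NetworkManager (or just reconnect) so the new dnsmasq instance picks it up
sudo service network-manager restart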

View this page for more info on dnsmasq/NetworkManager integration.

Adding a new service/daemon manually on ubuntu

So I added a shiny new znc init script to /etc/init.d/, and I want to enable it so I can use the service command to start and stop it and control which runlevels it runs at.

First, ensure the script is mode 755, then let's use chkconfig to get it loaded up!

If you don’t already have it:

apt-get install chkconfig

then do the following:

chkconfig --add znc
chkconfig --level 2345 znc on
service znc start
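
To sanity check that the runlevels stuck, you can ask chkconfig (assuming your chkconfig build supports --list, which mine did) or just look for the rc symlinks it created:

chkconfig --list znc       # should show znc on for runlevels 2-5
ls /etc/rc2.d/ | grep znc  # an S??znc symlink means it starts at runlevel 2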

And yay!!!

I've solved it! How to ensure your local repo is an exact 1:1 copy of its remote!

git fetch origin
git reset --hard origin/master
git clean -dffx

If you run the above commands instead of just a git pull, your repo will be synced exactly (1:1) with its remote. This means:

1. Any new files/folders you created will be removed, EVEN SUB-REPOS (repos inside your repo).
2. Any files you’ve changed that already existed in the repo will NOT have their changes stashed/saved.
3. BE CAREFUL. Something you might have been working on will get rm’ed.

Oh, and as a side note, when doing git adds it is wise to use "git add -A", as this will include any removals as well. So for me, here is how I have this set up in my zshrc:

function update () {
    cur="$(pwd)"; cd ~/env; git fetch origin; git reset --hard origin/master; git clean -dffx; cd "$cur"; cur=''; source ~/.zshrc
}

Pretty nifty huh?

Linkage that helped me: one two

Convert a Digital Ocean droplet to a VMware VM

The following guide explains a method for converting a Digital Ocean Droplet to a VMDK which can be used under VMware ESXi Hypervisor or other virtualization software.
This process is one-way; it is currently impossible to convert a VMDK back into a Digital Ocean Droplet.

View it here (pdf)!

Changing your irssi config when using znc

So I had the normal irssi config all defined and set up from my non-bouncer days, and I figured that with the bouncer it shouldn't be hard to get changed over.

Here is what I had pre-bouncer:

servers = (
  {
    address = "server.com";
    chatnet = "derp";
    port = "1820";
    use_ssl = "yes";
    ssl_verify = "no";
    autoconnect = "Yes";
  },

);

chatnets = {
  derp = {
    type = "IRC";
    nick = "username";
  };
};

After much raging, I finally figured out how to configure this to use my znc account on my bouncer (which has SSL enabled and listens on port 1234; it also supports multiple networks, which are selected by appending the network name after your username):

servers = (
  {
    ## My bouncer!!!
    address = "znc.server.com";
    password = "username/networkname:password";
    chatnet = "derp";
    port = "1234";
    use_ssl = "yes";
    ssl_verify = "no";
    autoconnect = "Yes";
  },

);
chatnets = {
derp = {type = "IRC";};
};

EDIT: Let's say you do have another network set up with your znc bouncer. Here's how that would look in your irssi config if you wanted to connect to both:

servers = (
  {
    ## My bouncer!!!
    address = "znc.server.com";
    password = "username/networkname:password";
    chatnet = "derp";
    port = "1234";
    use_ssl = "yes";
    ssl_verify = "no";
    autoconnect = "Yes";
  },


  {
    ## My bouncer connection for second network!!!!
    address = "znc.server.com";
    password = "username/secondnetworkname:password";
    chatnet = "anotherone";
    port = "1234";
    use_ssl = "yes";
    ssl_verify = "no";
    autoconnect = "Yes";
  },

);
chatnets = {
derp = {type = "IRC";};
anotherone = {type = "IRC";};
};

NOTE: If you have a server block but you don't want to join that server, instead of commenting out the entire block, just change autoconnect to "no" and you will NOT be joined to that server. Pretty nifty if you have a ton of servers in your config but don't want to join all of them when irssi starts.
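
For example, a parked entry inside your servers = ( ... ) list might look like this (the address and chatnet are just placeholders):

  {
    ## Defined but not joined on startup
    address = "some.other.server";
    chatnet = "parked";
    port = "6667";
    autoconnect = "no";
  },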

Using Hping

hping is nice for sending a bunch of traffic to test how remote machines react. Here are some cool ways to use it, plus some useful links with more info.

Start a flood of icmp packets with a rand src:

hping 16.0.24.2 --rand-source --flood --icmp -V

Do 500pps with just one src:
hping 16.0.24.2 -i u2000 --icmp -V

Here we use -i u2000, which tells hping to send a packet every 2000 microseconds, i.e. 500 packets per second. You can figure out what this value should be with 1e+6 / pps.
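
If you want the shell to do that arithmetic for you, a one-liner like this works (the 500 is just an example rate):

pps=500
echo "u$((1000000 / pps))"   # prints u2000, the value to pass to -i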

Through the --flood option, I can get around 400kpps out of a 10G link (with those icmp packets).

Some linkage:

http://www.rationallyparanoid.com/articles/hping.html

http://www.radarhack.com/tutorial/hping2.pdf