Managing servers with Salt

While I won’t go into detail here, as there are plenty of “getting started with Salt” sites out there, I thought I’d mention that I’ve been using Salt quite a bit in the last few months for a couple of different customers, and it’s been brilliant.

Sure, it has its foibles, but what doesn’t? It’s an excellent tool for managing and maintaining groups of servers.

Notice I say groups of servers. If you don’t have at least four hosts which need similar configuration, I wouldn’t bother; the overhead wouldn’t be worth it. But for four or more it’s definitely worth the effort, even if just for setting up the “common” elements of a host like SSH keys, base package sets, etc.
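For illustration, a minimal “common” state might look something like this (the state IDs, package names and key path are my own invention, not a recommendation):

```yaml
# common/init.sls - a hypothetical "common" state
base-packages:
  pkg.installed:
    - pkgs:
      - openssh-server
      - vim
      - htop

admin-ssh-key:
  ssh_auth.present:
    - user: root
    - source: salt://common/files/admin.pub
```

Applied with “salt '*' state.sls common” (or via the top file), this installs a base package set and deploys an SSH key on every targeted minion.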

Beware though: you need to keep your Salt master SECURE. It holds the keys to the kingdom, having root access to all your minion hosts. I understand it’s possible to run the minions as a non-root user, but this is unusual and not straightforward.

Salt is very similar to Puppet (and a few other host management tools) but is Free and very versatile. It can manage both Linux and Windows hosts, although support for the former is more fully featured.

Adding tab completion to the Python shell

Django, the Python web framework, provides shell access for debugging, and that shell has tab completion, which is _very_ useful. I wanted to enable the same functionality in the standard Python shell.

This post tells us how:
http://stackoverflow.com/questions/246725/how-do-i-add-tab-completion-to-the-python-shell

TLDR:

Create a file ~/.pythonrc:

# ~/.pythonrc
# enable syntax completion
try:
    import readline
except ImportError:
    print("Module readline not available.")
else:
    import rlcompleter
    readline.parse_and_bind("tab: complete")

Then, in your .bashrc file, add:

export PYTHONSTARTUP=~/.pythonrc

I am now an AWS Certified Solutions Architect


Monitoring Bind9 with Zabbix

I’ve been using Nagios and Cacti for many years, but they have their annoyances, not least the fact that they’re independent tools, so everything must be configured in two places.

So I’ve been moving over to using Zabbix. Don’t get me wrong, Zabbix can be a pain too, but it does alerting and graphing in one app, the configuration is sane, and the flexibility around alerts blows Nagios away.

I had some rather complex and beautiful graphing of Bind statistics in my old Cacti setup, and I wanted to reproduce it in Zabbix. I couldn’t find anything close to what I wanted, so I rolled up my sleeves and created it from scratch.

The result is a script, a config file and a template which together collect almost every statistic available from Bind via its HTTP/XML statistics channel, autodiscover per-zone statistics, and draw a bunch of pretty graphs.
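The collection relies on Bind’s statistics channel being enabled, which is done with a statistics-channels block in named.conf; the port and ACL below are just examples:

```
statistics-channels {
    inet 127.0.0.1 port 8053 allow { 127.0.0.1; };
};
```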

The code and setup instructions can be found on the GitHub page:

https://github.com/Pesticles/Zabbix-Bind9-Statistics-Collection

Please feel free to clone it and submit changes back to me.

Installing an Updated Intel e1000e Gigabit Ethernet Driver Using DKMS

I’ve had some issues lately with Ubuntu and Intel NICs, specifically with VLANs. I’ve also heard anecdotal evidence that there are issues with the driver version shipped with the stock kernels and that the latest version available from Intel fixes the issue.

So I’ve downloaded the updated driver from here:

https://downloadcenter.intel.com/SearchResult.aspx?lang=eng&ProdId=2255

At the time this was written, the latest version was 3.1.0.2.

It installs fine on my test Ubuntu 12.04 machine, so now I want to ensure it gets installed automatically whenever a new kernel is installed (approximately every few months). To do this I enlist the help of DKMS, the Dynamic Kernel Module Support system.

You’ll be compiling the module so you’ll need to install “build-essential” and your kernel headers, for the stock kernel this is “linux-headers-server”. You’ll also need the “dkms” package.

Download the driver tarball from the Intel website (above) and untar it into /usr/src. Mine untarred to /usr/src/e1000e-3.1.0.2/

Create the dkms.conf file in this directory as follows (note: depending on your browser, you may need to correct the double-quote characters after copying-and-pasting):

MAKE="make -C src/ BUILD_KERNEL=${kernelver}"
CLEAN="make -C src/ clean"
BUILT_MODULE_NAME=e1000e
BUILT_MODULE_LOCATION=src/
DEST_MODULE_LOCATION=/kernel/drivers/net/ethernet/intel/
PACKAGE_NAME=e1000e
PACKAGE_VERSION=3.1.0.2
AUTOINSTALL=yes
REMAKE_INITRD=yes

Add it to DKMS:

dkms add -m e1000e -v 3.1.0.2

Test build:

dkms build -m e1000e -v 3.1.0.2

And install:

dkms install -m e1000e -v 3.1.0.2

The above build and install steps apply to the currently running kernel. If you want to target a specific kernel, add “-k kernel-ver”, where kernel-ver is the kernel version, for example “-k 3.2.0-70-generic”.

And you’re done. The module will automatically compile for each new kernel that gets installed.

Luke.

I’m not suggesting you should make a habit of trusting packages from random websites, but if you want one, here is a Bash package compiled for Debian Lenny with the Shellshock patches.

Shrinking Raspberry Pi SD Card images for transfer/storage

As part of a project I’ve been working on, I’ve been building a customised version of Raspbian on 16GB SD cards. As all good engineers should, I’ve been taking snapshots of my work at various times by copying an image of the dev SD card to my backup storage.

However, each image is 16GB, the size of the card, but contains only about 2.5GB of actual data; the rest is free space in the Raspbian root filesystem.

I’d rather only store the data I need, so here’s a nifty script I whipped up to reduce the image size down to the bare minimum:

Here’s what it does:

  • Attaches the Linux partition in the image as a loopback device
  • Runs an fsck to check for consistency
  • Resizes the filesystem to the minimum possible
  • Disconnects the loopback device
  • Repartitions the image so the Linux partition is just larger than the newly resized filesystem within it
  • Truncates the image file to just after the end of the newly resized partition

The image is assumed to contain only a FAT boot partition and an EXT partition, in that order. I’m not sure how that plays with NOOBS; I haven’t tested it.
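The script itself isn’t reproduced inline here, but the steps above might be sketched in Bash roughly like this (the function name is mine, it assumes the EXT root is partition 2, and it needs root plus the e2fsprogs and parted tools):

```shell
# Sketch only: shrink an SD card image in place.
shrink_sdcard_image() {
    local img="$1"

    # Attach the image's partitions as loop devices (p1, p2, ...)
    local loop root
    loop=$(losetup -f --show -P "$img")
    root="${loop}p2"

    # fsck for consistency, then shrink the filesystem to the minimum possible
    e2fsck -f "$root"
    resize2fs -M "$root"

    # Work out the new size: dumpe2fs reports filesystem blocks,
    # parted wants 512-byte sectors
    local blocks bs sectors
    blocks=$(dumpe2fs -h "$root" 2>/dev/null | awk '/^Block count:/ {print $3}')
    bs=$(dumpe2fs -h "$root" 2>/dev/null | awk '/^Block size:/ {print $3}')
    sectors=$(( blocks * bs / 512 ))

    # Disconnect the loopback device
    losetup -d "$loop"

    # Repartition so partition 2 just covers the resized filesystem
    local start end
    start=$(parted -ms "$img" unit s print | awk -F: '$1=="2" {gsub(/s/,"",$2); print $2}')
    end=$(( start + sectors - 1 ))
    parted -s "$img" rm 2
    parted -s "$img" unit s mkpart primary ext4 "$start" "$end"

    # Truncate the image file just after the end of the new partition
    truncate -s $(( (end + 1) * 512 )) "$img"
}
```

Calling it would look like “shrink_sdcard_image pi-backup.img”; it modifies the image in place, so work on a copy.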

It works well for me, bringing my 16GB images down to a little over 2.5GB, which can be reduced further still by gzipping or otherwise compressing the image file.

I hope you find it useful too.

Luke.

Zabbix packages for Raspberry Pi Raspbian

Recently I’ve been playing around with Zabbix as a potential replacement for my existing Nagios+Cacti monitoring system. As part of this I’ve decided to instrument all my own equipment through Zabbix, to get the hang of it.

The Zabbix packages available for Debian Stable/Wheezy are somewhat out of date (v1.8), and although v2.2 is available through the backports repository, I decided I may as well get them straight from the source, as Zabbix has a Debian repository of its own with the latest packages.

This is fine for the i386 and x86_64 architectures, but they do not provide packages for the armhf architecture that Raspbian uses, nor is there a backports repo available (that I could find).

v1.8 is OK, but I wanted some specific v2.2 features in my agent, so I decided I might as well compile them myself from source. Once I had, it made sense to share them in case anyone else has the same need.

They are available from the packages.osnz.co.nz repository. You can easily install the repo and its public key by downloading and installing:

http://packages.osnz.co.nz/debian/osnz-release-wheezy.deb

Once that’s done, you can install any of the Zabbix packages by running “apt-get install zabbix-packagename”.

I plan to keep this repo up to date as the upstream Zabbix repo releases source files.

Luke.

http://docopt.org/

I think docopt should be getting more attention than it is. If you’re a programmer and have _ever_ lived the nightmare that is command-line argument parsing, you’ll like this library.

Luke.

Raspberry Pi Timelapse

OK, so using a Raspberry Pi with a PiCamera to make timelapse videos is hardly breaking new ground. Everyone’s doing it. But mine works pretty well, so I thought I’d share it.

Also, the other day it captured this:

[image]

Which is cool.

Anyway, I go a little further than just taking the pictures; I also have an automated process which:

  1. Annotates each image with temperature, humidity and barometric pressure readings taken during the day, as well as the date and time
  2. Merges the images into a single massive AVI
  3. Compresses the AVI into an MKV using x264 encoding
  4. Uploads the resulting clip to YouTube

My humble setup looks like this:

[image]

That’s a Pi in the bottom half of an Element14 Pi case, some “Helping Hands” to hold things in the right place, and an ethernet cable.

Since the images are moved to a CIFS share once they’re taken, I only need a small 8GB SD card with Raspbian installed.

You can find the code on Github: https://github.com/Pesticles/PiCamera

The Pi itself does just two things: take the pictures, and move them to a CIFS share on my Debian-based NAS.

The first script, /home/pi/still.sh, is run every minute by cron during the relevant times of day (6am-8:59pm right now, longer in summer):
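The script isn’t reproduced inline here; a hypothetical reconstruction (output path, filename format and quality setting are my guesses) might look like:

```shell
#!/bin/sh
# Hypothetical still.sh: grab one timestamped frame per cron run
OUTDIR="${HOME}/pictures"
mkdir -p "$OUTDIR"

# Only attempt a capture where the camera tooling actually exists
if command -v raspistill >/dev/null 2>&1; then
    raspistill -o "$OUTDIR/$(date +%Y%m%d-%H%M%S).jpg" -q 90
fi
```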

The second script, /home/pi/transfer.sh, is run every minute by cron and moves the images from local storage to the CIFS share mounted on /DATA:
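Again the script isn’t shown inline; a sketch along these lines would match the behaviour described (the destination path is my assumption):

```shell
#!/bin/sh
# Hypothetical transfer.sh: move captured frames onto the CIFS share
SRC="${HOME}/pictures"
DEST="/DATA/timelapse"

# If the share isn't mounted, do nothing; frames simply accumulate in SRC
# until it comes back
if mountpoint -q /DATA; then
    mv "$SRC"/*.jpg "$DEST"/ 2>/dev/null || true
fi
```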

The crontab entry for these is thus:

* 6-20 * * *    /home/pi/still.sh
* *    * * *    /home/pi/transfer.sh

This setup works very reliably. Even if the CIFS share goes away (as it does, often, when I’m tinkering with the NAS) the images just stack up on the SD card until it comes back.

Now we move on to the part that runs on the NAS. Obviously the parts related to the environmental data won’t be of much use to anyone else, but you can remove those reasonably easily.

This is the script which does all the heavy lifting, encode.sh, run by cron at 10:30pm each night:
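That script isn’t reproduced inline here either. As a rough stand-in for the flow, here’s a sketch: I’ve collapsed the AVI-then-MKV two-step into a single ffmpeg pass, and the paths and the process_image.py calling convention are my assumptions:

```shell
#!/bin/bash
# Hypothetical encode.sh: annotate, merge and compress the day's frames
FRAMES="/DATA/timelapse/$(date +%Y-%m-%d)"
OUT="/DATA/timelapse/videos"

if [ -d "$FRAMES" ] && command -v ffmpeg >/dev/null 2>&1; then
    mkdir -p "$OUT"
    # Stamp each frame with date/time and sensor readings first
    for f in "$FRAMES"/*.jpg; do
        python process_image.py "$f"
    done
    # Merge the stills and compress with x264 into an MKV in one pass
    ffmpeg -framerate 25 -pattern_type glob -i "$FRAMES/*.jpg" \
           -c:v libx264 -pix_fmt yuv420p "$OUT/$(date +%Y-%m-%d).mkv"
fi
```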

The helper script process_image.py uses the Python Image library to add the date, time and environmental data to each frame:

The final helper script, “upload_video.py”, is provided by Google as part of the YouTube API. I won’t go into detail here; it’s worthy of a blog post all of its own.

Hopefully you can mangle the above to best suit your own needs. I’m very happy with the result: it’s reliable and turns out excellent quality videos. The only thing lacking is automatically adding a soundtrack, but I’ll get to that too one day.

Luke.