2012-12-21

Vi IMproved

Ubuntu tweak #2 - diediedie nano die!

$ sudo update-alternatives --config editor
There are 4 choices for the alternative editor (providing /usr/bin/editor).

  Selection    Path                Priority   Status
------------------------------------------------------------
* 0            /bin/nano            40        auto mode
  1            /bin/ed             -100       manual mode
  2            /bin/nano            40        manual mode
  3            /usr/bin/vim.basic   30        manual mode
  4            /usr/bin/vim.tiny    10        manual mode

Press enter to keep the current choice[*], or type selection number: 3
update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/editor (editor) in manual mode

Ahh, much better!
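If you want to skip the interactive prompt (handy in a provisioning script), update-alternatives can also take the choice directly. A minimal sketch:

```shell
# non-interactive equivalent of picking selection 3 above
sudo update-alternatives --set editor /usr/bin/vim.basic
# confirm where the editor symlink chain now points
readlink -f /usr/bin/editor
```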

2012-12-20

Disable dnsmasq in NetworkManager

I have recently converted my work desktop to Ubuntu 12.10. Most things were better, but I was seeing horrible DNS lag from dnsmasq. To disable it, I've done the following:

sudo vi /etc/NetworkManager/NetworkManager.conf
# comment out dns=dnsmasq
sudo restart network-manager

This will regenerate your resolv.conf and you'll see your DNS servers directly and not localhost.
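A quick way to confirm the change took effect - with dns=dnsmasq commented out, resolv.conf should list your real DNS servers rather than a loopback address. A small check, assuming the stock file location:

```shell
# show the nameservers NetworkManager wrote out
grep '^nameserver' /etc/resolv.conf
# warn if a loopback entry (dnsmasq) is still present
if grep -q '^nameserver 127\.' /etc/resolv.conf; then
    echo "still pointing at a local resolver - dnsmasq may still be active"
fi
```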

2012-12-11

Mageia2 on EC2: Stormy Weather


This is my third post on getting Mageia2 running on Amazon Web Services' Elastic Compute Cloud. See my first post in the series for an overview.

In my last post I described how to create and upload an AMI that allows you to run Mageia2 on EC2. There were two issues with that method:
  1. The EC2 instances are using a one-off unverified kernel, obtained for testing purposes only.
  2. The instances launched can only be instance-store backed, i.e. on ephemeral disk.
Both of these problems are solvable. We'll address the kernel first. The solution? Compile your own!

As with any good open-source project, Mageia's source code is easy to obtain. For recompiling the kernel, I plucked the kernel's SRPM off mirrors.kernel.org. Reviewing the source, I found the Mageia development team had made considerable tweaks - so many that they were bundled together in a tarball inside the SRPM. Once I dug into that tarball, I found where to enable CONFIG_KERNEL_GZIP and disable CONFIG_KERNEL_XZ in the configuration.

Now it was a matter of getting a system to build the kernel on. Initially, I tried to do it on my local seed Mageia VM, but the 10GB disk was too small to hold all the compiled kernel sources. So, using the EC2 console, I launched an instance of the freshly-uploaded Mageia2 AMI. This is where I ran into a limitation of the first revision of the AMI I created - 2GB was insufficient to install all the compiler dependencies needed for creating a kernel package, but 8GB was ok.

Finally, I launched an m2.xlarge instance via the EC2 console with an 8GB root disk on instance-store to do the compilation. I wanted an instance with at least 2 cores to speed up the compile and sufficient additional space on the ephemeral disk (/dev/xvdb) to hold the compiled kernel sources. It still took a considerable amount of time - approximately 2 hours to compile the RPM. When I have to do this again, I might consider a high I/O instance to reduce the time spent waiting for the compile. Either way, the cost is negligible if you remember to shut the instance down when you're done - the m2.xlarge was around $1.35 for 3 hours and a hi1.4xlarge would be $3.10 for one hour. For those who follow this blog, you should recognize the build script.

EDIT: on 1/18/13, I recompiled the kernel using a hi1.4xlarge and the final timing from "time ./do-build.sh" was:

real    51m5.356s
user    173m1.470s
sys     32m44.040s

Of course, it took a little longer than that to install all the packages needed for building, but it can be done in less than 2 hrs.


Here are the steps for compiling the Mageia2 kernel:

#
# prep stuff done as root
#
sudo bash -o vi
# mount the ephemeral storage
mkfs -t ext4 /dev/xvdb
mkdir /media/extra
mount /dev/xvdb /media/extra
# create some swap
dd if=/dev/zero of=/media/extra/swapfile00 bs=1024 count=4194304
mkswap /media/extra/swapfile00
swapon /media/extra/swapfile00
# setup space for kernel building
mkdir /media/extra/kernel
chown $USER:$USER /media/extra/kernel
exit

#
# build stuff done as normal user
#
# prep for kernel building
cd $HOME
ln -s /media/extra/kernel
cd kernel/
# bring down the source
curl -O http://mirrors.kernel.org/mageia/distrib/2/SRPMS/core/updates/kernel-3.3.8-2.mga2.src.rpm
mkdir SOURCES
cd SOURCES
# extract the source
rpm2cpio ../kernel-3.3.8-2.mga2.src.rpm | cpio -i
# make a working copy of the .spec file
cp -p kernel.spec ..
# extract the mageia customizations
tar Jxf linux-3.3.8-mga2.tar.xz
cd 3.3.8-mga2/configs/
# modify the kernel config for gzip compression
cp -p x86_64.config x86_64.config.orig
vi x86_64.config
# diff of what it looks like when it's done
$ diff -u x86_64.config.orig x86_64.config
--- x86_64.config.orig  2012-07-12 08:53:47.000000000 +0000
+++ x86_64.config       2012-11-15 04:48:37.000000000 +0000
@@ -67,10 +67,10 @@
 CONFIG_HAVE_KERNEL_LZMA=y
 CONFIG_HAVE_KERNEL_XZ=y
 CONFIG_HAVE_KERNEL_LZO=y
-# CONFIG_KERNEL_GZIP is not set
+CONFIG_KERNEL_GZIP=y
 # CONFIG_KERNEL_BZIP2 is not set
 # CONFIG_KERNEL_LZMA is not set
-CONFIG_KERNEL_XZ=y
+# CONFIG_KERNEL_XZ is not set
 # CONFIG_KERNEL_LZO is not set
 CONFIG_DEFAULT_HOSTNAME="(none)"
 CONFIG_SWAP=y

# rebuild the mageia customizations
cd ../..
mv linux-3.3.8-mga2.tar.xz linux-3.3.8-mga2.tar.xz.orig
tar Jcf linux-3.3.8-mga2.tar.xz 3.3.8-mga2

# install builder dependencies
sudo urpmi easyrpmbuilder
sudo urpmi elfutils-devel zlib-devel binutils-devel newt-devel python-devel pciutils-devel asciidoc xmlto docbook-style-xsl

# setup the build script
cat do-build.sh
#!/bin/sh -x
rm -rf BUILD BUILDROOT RPMS SRPMS tmp || true
mkdir -p BUILD BUILDROOT RPMS SRPMS tmp

OPTS=""
OPTS="$OPTS --with=server"
OPTS="$OPTS --without=desktop"
OPTS="$OPTS --without=desktop586"
OPTS="$OPTS --without=netbook"
rpmbuild $OPTS -bb --define="_topdir $PWD" --define="_tmppath $PWD/tmp" kernel.spec 2>&1 | tee kernel-build.txt

# do the build
time ./do-build.sh

# save the rpm
scp -p RPMS/x86_64/kernel-server-3.3.8-2.mga2-1-1.mga2.x86_64.rpm $REMOTE_SERVER:
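Before shipping the RPM off, it's worth sanity-checking that the rebuilt kernel really carries the gzip config. One way to do that is to pull the config file back out of the package - a sketch, assuming the config lands at the usual /boot/config-<version> path inside the RPM:

```shell
# extract the kernel config from the freshly built package and
# confirm gzip compression is on and xz is off, per the diff above
rpm2cpio RPMS/x86_64/kernel-server-*.rpm |
    cpio -i --to-stdout './boot/config-*' 2>/dev/null |
    grep 'CONFIG_KERNEL_\(GZIP\|XZ\)'
# expect CONFIG_KERNEL_GZIP=y and "# CONFIG_KERNEL_XZ is not set"
```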

2012-12-05

Mageia2 on EC2: Boarding procedures

This is my second post on getting Mageia2 running on Amazon Web Services' Elastic Compute Cloud. See my first post in the series for an overview.

The first step to creating a Mageia2 install on EC2 is to have a local Mageia2 system as your seed setup. Why? Because you must use urpmi, the Mageia package installer. It is the equivalent of apt-get in Ubuntu or yum in CentOS/RHEL/Amazon Linux. Yes, you can use rpm to install individual packages (and we will), but urpmi is what talks to the media sets (i.e. repositories) and makes sure you have all your dependencies installed. Besides the man page and the urpmi page on the Mageia wiki, I found a good quick reference guide that helped.
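For apt-get and yum users, here's a rough cheat sheet of the urpmi family (illustrative only - vim-enhanced is just an example package name):

```shell
urpmi.update -a        # refresh all media, like 'apt-get update'
urpmi vim-enhanced     # install a package, like 'apt-get install'
urpmq -i vim-enhanced  # show package info, like 'yum info'
urpmf /usr/bin/vim     # which package owns a file, like 'yum provides'
urpme vim-enhanced     # remove a package, like 'apt-get remove'
```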

I'll not cover the setup of your seed Mageia2 system here - the installation was a breeze. I used VirtualBox under Windows 7 to install from the Dual-arch ISO CD, but you could likely use any old PC you have lying around and install however you like - USB key, Live CD, whatever.

Also, I'm not going to document getting the EC2 command line tools working on your seed system. Installing java was simple ("urpmi java" I believe) and getting the API tools and AMI tools installed and configured is well documented by Amazon and plenty of others.

Once you have a functioning Mageia2 system and working EC2 AMI and API tools, then we're ready to begin.

The initial steps we'll be following are a mix of the Mageia chroot install instructions and the official documentation on how to create an EC2 instance-store backed AMI. Another major factor was choosing a kernel. Any good distribution ships with its own kernel, and Mageia is no different. And of course you can use your own kernel on EC2. The most efficient way to do this is to use the PV-GRUB AKI provided by AWS to load the kernel present on your instance's disk, which is what we'll do.

For the most part, all of this went well after some trial-and-error. However, I did run across a few issues:
  1. Make sure you create a big enough loopback device. I started with 2GB and while it was enough for the base install, it wasn't enough once I started adding other packages later. My docs below use 8GB. The maximum is 10GB.
  2. Make sure you choose the right PV-GRUB AKI (more on kernels in a moment).
  3. Use a gzip compressed kernel, not an xz compressed kernel.
  4. Choose the right mirror for urpmi.addmedia (distro.ibiblio.org was extra slow for me - mirrors.kernel.org was much faster).
  5. Make sure you install the critical packages. Without dhcp-client, you won't get your IP address and without an IP address, you're sunk. Same goes for sshd and sudo.
  6. At this stage, I didn't pay attention to the ssh key pairs built into the EC2 provisioning system. I baked a new public key of the "mageia" user into the install.
When choosing a PV-GRUB AKI, the AWS documentation explains:
You must choose an AKI with "hd0" in the name if you want a raw or unpartitioned disk image (most images). Choose an AKI with "hd00" in the name if you want an image that has a partition table.
Since I am doing a direct mke2fs of the loopback image, it doesn't have a partition table. However, I was initially using the wrong PV-GRUB AKI - the one for images with partition tables - resulting in nothing working and confusing error messages. I'm sure you could fdisk your loopback image and create partitions if you want, but I didn't see it as necessary, as I often use the other ephemeral disks for swap, etc. So I had to use an "hd0" PV-GRUB AKI to get it to work.
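To find the candidate AKIs for your region yourself, you can list Amazon's published kernel images with the API tools - a sketch, assuming us-east-1:

```shell
# list Amazon-owned images and keep the partitionless hd0 PV-GRUB AKIs
ec2-describe-images -o amazon --region us-east-1 | grep pv-grub-hd0
```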

After I got the right PV-GRUB AKI set, the system still wouldn't load. The error message from the system console was:

ERROR Invalid kernel: xc_dom_probe_bzimage_kernel: unknown compression format

In chatting with the very helpful "tmb" from the #mageia IRC channel on freenode, I was able to overcome this issue. The problem is that all Mageia kernels are xz compressed by default, which is supported just fine by regular grub. However, the PV-GRUB AKI I was using didn't support xz compressed kernels. tmb provided me with a gzip compressed kernel to bootstrap my first EC2 instance running Mageia2. Thanks again tmb - without your help, I wouldn't have gotten this working!
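If you ever need to check which compression a kernel image uses, "file" won't tell you - the bzImage wrapper hides the payload. A heuristic sketch that greps for the compression magic bytes (the same idea as the kernel's extract-vmlinux script); the path is just an example:

```shell
KIMG=/boot/vmlinuz-3.4.18-server-1.mga2
# gzip payloads start with bytes 1f 8b 08; xz payloads with fd "7zXZ"
if LC_ALL=C grep -qa "$(printf '\037\213\010')" "$KIMG"; then
    echo "gzip payload - PV-GRUB can boot this"
elif LC_ALL=C grep -qa "$(printf '\3757zXZ')" "$KIMG"; then
    echo "xz payload - expect the xc_dom_probe_bzimage_kernel error"
fi
```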

After the kernel loaded, it was just straightforward trial-and-error troubleshooting until I was able to log in. Here are the sanitized steps I used to get my first Mageia2 instance-store backed EC2 AMI uploaded:

# preparation
export PATH=$PATH:/sbin:/usr/sbin
# create a working directory
mkdir $HOME/ec2
# create ssh public key
ssh-keygen -t rsa -f $HOME/ec2/mageia -C "mageia@ec2" -P ""
# setup the image
dd if=/dev/zero of=mageia2-instance-store-v1.img bs=1M count=8192
# format it
mke2fs -F -j $HOME/ec2/mageia2-instance-store-v1.img

# everything forward needs to be done as root 
sudo bash -o vi

# mount the image for chroot
export MAGEIA_PUB_KEY=$HOME/ec2/mageia.pub
export CHRDIR=$HOME/ec2/loop
# mount the chroot location
mount -o loop $HOME/ec2/mageia2-instance-store-v1.img $CHRDIR

# create the minimum devices
mkdir $CHRDIR/dev
/sbin/makedev $CHRDIR/dev console
/sbin/makedev $CHRDIR/dev null
/sbin/makedev $CHRDIR/dev zero

# setup the minimum filesystems
mkdir $CHRDIR/etc
cat > $CHRDIR/etc/fstab << EOF
/dev/xvda1 /         ext3    defaults        1 1
none       /dev/pts  devpts  gid=5,mode=620  0 0
none       /dev/shm  tmpfs   defaults        0 0
none       /proc     proc    defaults        0 0
none       /sys      sysfs   defaults        0 0
EOF

# add required /proc filesystem
mkdir $CHRDIR/proc
mount -t proc none $CHRDIR/proc

# choose the best/fastest mirror
GET http://mirrors.mageia.org/api/mageia.2.x86_64.list | grep country=US
# setup the urpmi media locations in the chroot
urpmi.addmedia --distrib --urpmi-root $CHRDIR http://mirrors.kernel.org/mageia/distrib/2/x86_64
# install the minimum packages
urpmi --auto --urpmi-root $CHRDIR kernel-server basesystem urpmi locales-en sshd sudo dhcp-client

# MASSIVE HACK TIME
#
# kernel from tmb:
# http://tmb.mine.nu/Mageia/2/ec2/
#
mkdir tmb
pushd tmb
curl -O http://tmb.mine.nu.nyud.net/Mageia/2/ec2/kernel-server-3.4.18-1.mga2-1-1.mga2.x86_64.rpm
curl -O http://tmb.mine.nu.nyud.net/Mageia/2/ec2/kmod-7-7.mga2.x86_64.rpm
popd
# install custom kernel
rpm --root=$CHRDIR -Uhv tmb/*.rpm

# ensure the new ramdisk is created properly
chroot $CHRDIR
cd /boot
mkinitrd initrd-3.4.18-server-1.mga2.img.a 3.4.18-server-1.mga2
exit

# set the kernel to load on boot
cat > $CHRDIR/boot/grub/menu.lst << EOF
default=0
timeout=0
title linux
  root (hd0)
  kernel /boot/vmlinuz-server ro root=/dev/xvda1 console=hvc0 BOOT_IMAGE=linux-nonfb
  initrd /boot/initrd-server.img
EOF

# configure the chroot network for ec2
cat > $CHRDIR/etc/sysconfig/network-scripts/ifcfg-eth0 << EOF
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
TYPE=Ethernet
USERCTL=yes
PEERDNS=yes
IPV6INIT=no
EOF
cat > $CHRDIR/etc/sysconfig/network << EOF
NETWORKING=yes
CRDA_DOMAIN=US
EOF

# configure ssh
test -f $CHRDIR/etc/ssh/sshd_config.orig || cp -p $CHRDIR/etc/ssh/sshd_config $CHRDIR/etc/ssh/sshd_config.orig
cat $CHRDIR/etc/ssh/sshd_config.orig |
    sed -e 's/^#UseDNS yes/UseDNS no/g' |
    sed -e 's/^PermitRootLogin no/PermitRootLogin without-password/g' > $CHRDIR/etc/ssh/sshd_config
# setup mageia account
chroot $CHRDIR /usr/sbin/useradd --create-home --home /home/mageia --shell /bin/bash mageia
mkdir --mode=0700 $CHRDIR/home/mageia/.ssh
(umask 0077; touch $CHRDIR/home/mageia/.ssh/authorized_keys)
cat $MAGEIA_PUB_KEY >> $CHRDIR/home/mageia/.ssh/authorized_keys
echo "set -o vi" >> $CHRDIR/home/mageia/.bashrc
chown -Rh 500:500 $CHRDIR/home/mageia/.ssh
(umask 0227; echo "mageia ALL=(ALL) NOPASSWD:ALL" > $CHRDIR/etc/sudoers.d/mageia)

# dismount the chroot
umount $CHRDIR/proc
umount -d $CHRDIR

# setup for EC2
export EC2_ID=[aws account id #]
export EC2_PRIVATE_KEY=[location of private key]
export EC2_CERT=[location of signing cert]
export EC2_ACCESS=[iam access key]
export EC2_SECRET=[iam secret key]

BUCKETNAME="$EC2_ID-mageia2-instance-store-v1"
# create S3 bucket
# you can use the AWS Console instead of s3cmd, if you like
s3cmd mb s3://$BUCKETNAME
# where to put the bundle parts
pushd $HOME/ec2
mkdir mageia2-instance-store-v1
# create the AMI bundle
# AKI for pv-grub-hd0_1.03-x86_64.gz partitionless PV-GRUB in US-East-1
AKIID="aki-88aa75e1"
ec2-bundle-image -i mageia2-instance-store-v1.img -d mageia2-instance-store-v1 -r x86_64 --kernel $AKIID -k $EC2_PRIVATE_KEY -c $EC2_CERT -u $EC2_ID
# put it on S3
ec2-upload-bundle -b $BUCKETNAME -m mageia2-instance-store-v1/mageia2-instance-store-v1.img.manifest.xml -a $EC2_ACCESS -s $EC2_SECRET
# register it
ec2-register $BUCKETNAME/mageia2-instance-store-v1.img.manifest.xml -n mageia2-instance-store-v1
popd
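ec2-register prints the new AMI id; from there you can launch a test instance and log in with the key baked into the image earlier. A sketch with placeholder values:

```shell
# launch the freshly registered AMI (substitute the real AMI id)
ec2-run-instances ami-XXXXXXXX --instance-type m1.small
# once ec2-describe-instances shows it running, log in as "mageia"
ssh -i $HOME/ec2/mageia mageia@[public dns name]
```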

2012-11-27

Mageia2 on EC2: Flying in a different direction

At $WORK, we make the claim that as a client's systems oversight service, we are "distribution agnostic" - meaning we'll help you out regardless of what Linux distribution you're running. Most of the time, we work with Ubuntu, RHEL, CentOS or Amazon Linux. However, a client recently decided on running Mageia2 GNU/Linux on Amazon Web Services' Elastic Compute Cloud, so I had to pick up the challenge.

Now as far as distributions go, it seems to me that Mageia is "ok" - a fork of Mandriva with a nice community developed around it. Unfortunately, what hasn't developed is any interest in running it on EC2. There seem to be a few VPS providers that support it, but not any of the popular ones that I know of. Still, AWS EC2 has the capability of running your own Linux distribution and even your own kernel, so I rolled up my sleeves and dug into getting it going. As there's a lot to this configuration, I will break it down over a few posts. Here is an outline:
  1. Boarding procedures - performing a chroot install on a local Mageia system 
  2. Stormy weather - recompiling the Linux kernel to get Mageia running on EC2
  3. Cruising altitude - performing another chroot install for an EBS backed instance
I'll be posting sanitized code samples and links to the key documents that explain the steps.

EDIT 2/7/2013: I've put the code samples up on github.

2012-10-26

Enabling Markdown on your apache webserver

At $WORK, we're toying with moving all documentation to Markdown and git. To do that, I needed to be able to render it locally for preview before pushing to GitHub, Bitbucket or another yet-to-be-determined repository. This setup was rather quick, easy and painless. Here are the steps:

1. Install Text::Markdown as your converter. The perl-Text-Markdown RPM was in the repoforge repository:

sudo yum install perl-Text-Markdown

2. The package comes with a script that does all the heavy lifting. It just needs to be slightly tweaked to make it run as a CGI.
$ cp -p /usr/bin/Markdown.pl $CGIBIN/Markdown.cgi
$ cd $CGIBIN
$ vi Markdown.cgi
$ diff -U0 /usr/bin/Markdown.pl Markdown.cgi
--- /usr/bin/Markdown.pl        2011-02-10 11:50:20.000000000 -0500
+++ Markdown.cgi        2012-10-26 12:08:53.000000000 -0400
@@ -147 +147,2 @@
-print main(@ARGV) unless caller();
+print "Content-type: text/html\n\n";
+print main($ENV{PATH_TRANSLATED}) unless caller();
3. Configure an apache handler to hand all Markdown-formatted files to your action.

Action markdown /cgi-bin/Markdown.cgi
AddHandler markdown .md

4. Gracefully restart apache

sudo apachectl graceful
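To verify the handler works, drop a test file in the docroot and fetch it - a sketch, assuming a default docroot of /var/www/html:

```shell
echo '# hello' > /var/www/html/test.md
# should come back as rendered HTML (e.g. an <h1> element), not raw text
curl -s http://localhost/test.md
```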

Tips: This is for private previewing. Don't put this on your public webserver. If you do, you're asking for the wrath of the ancient CGI deities to descend upon your server and sunder it to ashes. If I were doing this for a public-facing site, I'd use something that caches the opcode for the script (most likely in another programming language as well), caches the rendered page in memcache, etc. etc. You've been warned.

UPDATE: I've updated my renderer for Markdown.

2012-08-23

Building RPMs cleanly

I recently found this script to build a solr rpm and I love how it simply solves so many problems with RPM packaging with a few defines. Here's my slightly modified version for building an apache package, which leaves the SOURCES directory untouched and keeps a log of the build so you can go back and review it later:

#!/bin/sh -x
rm -rf BUILD RPMS SRPMS tmp || true
mkdir -p BUILD RPMS SRPMS tmp

rpmbuild -bb --define="_topdir $PWD" --define="_tmppath $PWD/tmp" apache.spec 2>&1 | tee apache-build.txt
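The two --define flags are what keep things tidy: by default rpmbuild scatters its work under a global _topdir. You can see the stock value with rpm's macro evaluator:

```shell
# show where rpmbuild would put BUILD/RPMS/etc. by default
rpm --eval '%{_topdir}'
# the script overrides this to $PWD, so each project tree is
# self-contained and safe to wipe between builds
```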

2012-07-11

Log housekeeping with python

This week I was finally able to finish some python code I've been writing for $WORK - a script to rotate webserver logs directly to S3. This task was similar to something I'd done a long, LONG time ago (14 years since the first rev!) in a programming language far, far away. I had a much bigger chip on my shoulder then - site analytics really isn't done with log parsing anymore, so I skipped the whole test for open filehandles and email notifications. Anyway, here it is - enjoy!

https://github.com/dialt0ne/rotate-to-s3

Kudos to Justin for critiquing my python.

2012-06-29

EBS snapshots and LVM2

I've been meaning to try this for a while to see how it goes - using LVM2 to take an "instantaneous" snapshot of an EBS volume and then letting AWS take its time. I found LVM wasn't as quick as I'd like. Also, I haven't performance tested this, so I don't know how bad the latency will be. Either way, I think it's an easy way to get a consistent backup:


# prep
export MYAZ="us-east-1a"
export MYINST="i-XXXXXXXX"
# create 1st EBS volume and attach
ec2-create-volume --size 2 --availability-zone $MYAZ
export VOL0="vol-XXXXXXXX"
ec2-attach-volume $VOL0 --instance $MYINST --device /dev/sdf

# create LVM partition 1
fdisk /dev/sdf
# add to LVM
pvcreate /dev/sdf1
# create a volume group
vgcreate vol0 /dev/sdf1
# create a logical volume
lvcreate -l80%FREE -n test vol0
# format it
mke2fs -j -m0 /dev/vol0/test
# mount it
mkdir -p /mnt/vol0/test
mount /dev/vol0/test /mnt/vol0/test

# lock data consistently

# create LVM snapshot
lvcreate -L300M -s -n test2 /dev/vol0/test

# unlock data consistently

# create EBS snapshot
ec2-create-snapshot $VOL0
export VOL0_SNAP="snap-XXXXXXXX"

# remove LVM snapshot
lvremove vol0/test2

# create 2nd EBS volume from snapshot and attach
ec2-create-volume --snapshot $VOL0_SNAP --availability-zone $MYAZ
export VOL1="vol-YYYYYYYY"
ec2-attach-volume $VOL1 --instance $MYINST --device /dev/sdf

# import snapshot as new volume group
vgimportclone -n vol1 /dev/sdg1
# activate new volume group
vgchange -a y vol1
# mount it
mkdir -p /mnt/vol1/test2
mount /dev/vol1/test2 /mnt/vol1/test2

EDIT 2013.02.10: "lock data consistently" - I highly recommend "fsfreeze" which is built into most Linux distributions nowadays.
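With fsfreeze, the "lock data consistently" placeholder above becomes concrete - a sketch, assuming the mount point from the steps above:

```shell
# block writes and flush dirty data to disk
fsfreeze -f /mnt/vol0/test
# take the LVM snapshot while the filesystem is quiescent
lvcreate -L300M -s -n test2 /dev/vol0/test
# resume writes
fsfreeze -u /mnt/vol0/test
```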
