2013-10-07

Enabling Markdown on your apache webserver - redux

In a previous post I enabled previewing of Markdown-formatted documents using the Text::Markdown perl module. As simple as that module was to set up, it only implements the original Daring Fireball Markdown. Markdown adoption at $WORK has ramped up, and that atrophied standard is no longer enough, so I've had to find another renderer.

I started by looking at how GitHub renders Markdown and found they use the Redcarpet gem. The examples in the documentation, combined with the ruby-redcarpet Ubuntu package, made upgrading my Markdown renderer a pretty simple exercise:
  1. Install the Redcarpet gem:

     apt-get install ruby-redcarpet

  2. Replace ye olde Markdown.cgi with the new hotness Markdown.cgi:

     #!/usr/bin/ruby
     require 'redcarpet'
     print "Content-type: text/html\n\n"
     markdown = Redcarpet::Markdown.new(
         Redcarpet::Render::HTML,
         :autolink => true,
         :fenced_code_blocks => true,
     )
     puts markdown.render(File.read(ENV['PATH_TRANSLATED']))

  3. Start your browser, surf the docs!
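
For completeness, here's roughly how the CGI gets wired into Apache. This mirrors the handler setup from the previous post; the conf file location, extensions, and CGI path below are assumptions, so adjust them to your layout:

    # in httpd.conf or an included conf file (requires mod_actions);
    # the Action handler passes the requested file to the CGI in PATH_TRANSLATED
    AddHandler markdown .md .markdown
    Action markdown /cgi-bin/Markdown.cgi
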
UPDATE: I've updated my renderer for Markdown.

2013-07-22

Building a perl package for Amazon Linux, CentOS or RHEL

A previous post detailed how to build a .deb file for a perl module on Ubuntu. However, I needed the updated module as a .rpm on an Amazon Linux system, so I worked out the procedure for that OS as well. It was a lot easier than I thought it would be.

# install general dependencies of Net::Amazon::EC2
sudo yum --enablerepo=epel install \
    perl-Net-Amazon-EC2 perl-File-Slurp perl-DBI perl-DBD-MySQL \
    perl-Net-SSLeay perl-IO-Socket-SSL perl-Time-HiRes perl-Params-Validate \
    perl-Date-Manip perl-DateTime perl-DateTime-Format-ISO8601 \
    ca-certificates

# install stuff required for the build
sudo yum --enablerepo=epel install cpanspec rpm-build.x86_64
sudo yum --enablerepo=epel install perl-Test-Exception perl-CPAN

# generate .spec file
PACKAGER="John Smith <jsmith@example.com>"
cpanspec --packager "$PACKAGER" -v Net::Amazon::EC2

# get the source
mkdir -p rpmbuild/SOURCES
wget -Orpmbuild/SOURCES/Net-Amazon-EC2-0.23.tar.gz \
    http://search.cpan.org/CPAN/authors/id/M/MA/MALLEN/Net-Amazon-EC2-0.23.tar.gz

# make sure these are unset, or the tests will try to talk to AWS and fail
unset AWS_ACCESS_KEY_ID
unset SECRET_ACCESS_KEY

# do the build
rpmbuild -bb perl-Net-Amazon-EC2.spec
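
rpmbuild drops the finished package under rpmbuild/RPMS. Installing it from there is one command (the dist tag in the file name will vary with your OS release, hence the glob):

# install the freshly-built package
sudo yum localinstall rpmbuild/RPMS/noarch/perl-Net-Amazon-EC2-0.23-*.noarch.rpm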

2013-07-09

Piping STDOUT to one command but STDERR to a different command

I found this awesome Stack Overflow answer and had to write it up as a note to myself (note that the >( ... ) process substitution syntax is a bash feature, not plain POSIX sh):

./foobar.pl > >( logger -t stdout ) 2> >( logger -t stderr )
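
A quick way to convince yourself it works (the tags are arbitrary; check the results in /var/log/syslog):

# stdout and stderr should land in syslog under different tags
( echo "this went to stdout"; echo "this went to stderr" >&2 ) \
    > >( logger -t stdout ) 2> >( logger -t stderr )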

Specifically, I hope to use this to replicate all EBS snapshots taken on an instance, e.g.:

ec2-consistent-snapshot > >( ec2-replicate-snapshots ) 2> >( logger -t $PROGNAME )

2013-07-08

Building a perl package for Ubuntu

I recently had a need to update an Ubuntu perl module for a client at $WORK. Specifically, I needed to update the Net::Amazon::EC2 module to version 0.23 to support tagging of AWS resources. Looking at CPAN, the library went unmaintained for close to 3 years before a new author picked it up. Things are looking up, though, as this fall's release of Ubuntu will likely ship the new version.

Still, this project couldn't wait until then. Thankfully, I found a good article on how to roll your own Debian packages for perl modules. There were only two hang-ups. First, the "make test" step fails because it can't authenticate to AWS; I disabled it in my procedure by exporting the variable DEB_BUILD_OPTIONS="nocheck". Second, the module package then builds fine, but it has no dependencies set, so the installed module can't be used without a few rounds of dependency hell. To solve this, I just borrowed the dependency list from the current 0.14 version of the package: "perl, libmoose-perl, libparams-validate-perl, liburi-perl, libwww-perl, libxml-simple-perl".

Note, this is not a well-tuned setup. There are lots of peculiarities in the dh-make-perl and debuild process that I've glossed over. For example, if you don't have a pgp key for your $DEBEMAIL address in your private keyring, the .deb package will still build, but it won't be signed. This may or may not be a problem for you; YMMV.

Here are the steps I took for building the .deb:

#
# install some packages
#
sudo apt-get update
sudo apt-get install dh-make-perl libmodule-build-perl debhelper devscripts
sudo apt-file update

#
# do the build
#

# note: disable "make test" so you don't need valid AWS keys
# (I couldn't make the tests work because even though I set the env variables,
# I think debuild cleans the environment before running make)
export DEB_BUILD_OPTIONS="nocheck"

export DEBFULLNAME="John Smith"
git config --global user.name "$DEBFULLNAME"
export DEBEMAIL="jsmith@example.com"
git config --global user.email "$DEBEMAIL"

wget http://search.cpan.org/CPAN/authors/id/M/MA/MALLEN/Net-Amazon-EC2-0.23.tar.gz
mkdir builder
cd builder
# debian is particular about how original tar files are named
ln -s ../Net-Amazon-EC2-0.23.tar.gz libnet-amazon-ec2-perl_0.23.orig.tar.gz
tar -pzxvf libnet-amazon-ec2-perl_0.23.orig.tar.gz
cd Net-Amazon-EC2-0.23/
dh-make-perl --depends "perl, libmoose-perl, libparams-validate-perl, liburi-perl, libwww-perl, libxml-simple-perl" .
debuild
cd ..
# voila, a .deb package
dpkg --info libnet-amazon-ec2-perl_0.23-1_all.deb
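
If the info looks right, install it with dpkg (install the dependencies listed above first, or follow up with "apt-get -f install"):

sudo dpkg -i libnet-amazon-ec2-perl_0.23-1_all.deb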

2013-03-05

Forcing a service to stop during Chef execution

My latest project at work has been to get a Chef installation going for a client who is migrating a site from a traditional datacenter to Amazon Web Services. Part of this application's unique snowflake of a deploy process is that its services must be restarted on every deploy, because the config files - apache's httpd.conf, tomcat's server.xml, etc. - are included in the deploy. Most importantly, if you don't stop the service before the first deploy, it will most likely not stop at all afterwards, because the deploy makes significant changes to the configuration (pid file locations, etc.). NOTE: I would NOT have designed the deploy process this way, but I'm stuck with it for now. So, in the meantime, this is how I deal with it:

1) Right after the service definition in the default recipe (e.g. apache2::default), create a semaphore file. The resource only fires when the file doesn't already exist (and it shouldn't, on a fresh deploy), and its notification immediately stops the service:

file "/tmp/gather-apache2-semaphore" do
    mode 00644
    owner "root"
    group "root"
    action :create
    content "must stop apache"
    notifies :stop, resources(:service => "apache2"), :immediately
end
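
For reference, this assumes a service resource like the following is defined earlier in the recipe (the stock apache2 cookbook declares something similar); all the notifies calls in this post point at it:

service "apache2" do
    supports :status => true, :restart => true, :reload => true
    action [:enable, :start]
end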

2) Make sure your application deploy recipe notifies a restart of the service (e.g. apache2::example_app_config):

ruby_block "copy_httpd.conf" do
    block do
        FileUtils.copy_file("#{deploy_directory}/example_app/build_out/dist/conf/httpd.conf", "/etc/apache2/httpd.conf")
    end
    notifies :restart, resources(:service => "apache2")
end

and/or if you are calling this from another recipe (e.g. example_app::deploy):

include_recipe "apache2"

bash "copy_example_app_htdocs.tar" do
    user "root"
    cwd "#{deploy_directory}/example_app/build_out/dist"
    code <<-EOH
        tar --extract --file=example_app_htdocs.tar --directory /etc/apache2/htdocs
    EOH
    only_if { ::File.exists?("/etc/apache2/htdocs") }
    notifies :restart, resources(:service => "apache2")
end

3) Finally, after all the key recipes have run and your unique snowflake is deployed, the service will have been restarted. Now you can clean up the semaphore, so the next deploy will force the service to stop again:

file "/tmp/gather-apache2-semaphore" do
    action :delete
end

Now the service is guaranteed to be stopped while your application is deploying.

2013-02-28

Importing a SSL/TLS Wildcard Certificate and Private Key from your webserver onto your Cisco ASA 5500 series firewall

Whoops! The self-signed certificate on the corporate Cisco ASA 5520 firewall expired a month ago and now needs to be updated. However, we have a legitimate wildcard certificate issued by GeoTrust, so I figured out how to re-use that cert on the ASA by converting it with openssl into a format the ASA likes. Here are the steps:

1. convert all certs and keys to PEM format

    mkdir asa
    openssl x509 -in example_com.crt \
        -out asa/example_com.crt -outform pem
    openssl x509 -in geotrust-intermediate-ca.crt \
        -out asa/geotrust-intermediate-ca.crt -outform pem
    openssl rsa -in example_com.key \
        -out asa/example_com.key -outform pem
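
This is also a good moment to confirm the certificate and private key actually match; this check isn't part of the original procedure, but the two md5 sums it prints should be identical:

    openssl x509 -noout -modulus -in asa/example_com.crt | openssl md5
    openssl rsa -noout -modulus -in asa/example_com.key | openssl md5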

2. now bundle them into PKCS12 format

    cd asa
    openssl pkcs12 -export -in example_com.crt -inkey example_com.key \
        -certfile geotrust-intermediate-ca.crt -out example_com.p12
    # remember the password when prompted to encrypt it "Enter Export Password:"
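
    # optional: inspect the bundle to make sure the cert chain and key
    # made it in (prompts for the export password you just set)
    openssl pkcs12 -info -noout -in example_com.p12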

3. now base64 encode it for the ASA

    ( echo -----BEGIN PKCS12-----;
      openssl base64 -in example_com.p12;
      echo -----END PKCS12-----; ) > example_com.pkcs12

4. Import the cert on the ASA via copy/paste from example_com.pkcs12

    fw1# conf t
    fw1(config)# crypto ca import example_com-trustpoint pkcs12 {password}

    Enter the base 64 encoded pkcs12.
    End with the word "quit" on a line by itself:
    -----BEGIN PKCS12-----
    { snip }
    -----END PKCS12-----
    quit
    INFO: Import PKCS12 operation completed successfully
    fw1(config)# exit
    fw1# wr me
    fw1# show crypto ca certificates

5. Enable the trustpoint on the outside interface

    fw1# conf t
    fw1(config)# ssl trust-point example_com-trustpoint outside
    fw1(config)# exit
    fw1# wr me
    fw1# show ssl

6. Bounce the VPN

    fw1# conf t
    fw1(config)# webvpn
    fw1(config-webvpn)# no enable outside
    WARNING: Disabling webvpn removes proxy-bypass settings.
    Do not overwrite the configuration file if you want to keep existing proxy-bypass commands.
    INFO: WebVPN and DTLS are disabled on 'outside'.
    fw1(config-webvpn)# enable outside   
    INFO: WebVPN and DTLS are enabled on 'outside'.
    fw1(config)# exit
    fw1# wr me

Here are some of the helpful pages I found to get the solution above:

http://www.cisco.com/en/US/products/ps6120/prod_configuration_examples_list.html#anchor10
http://www.cisco.com/en/US/products/ps6120/products_configuration_example09186a00808b3cff.shtml
https://supportforums.cisco.com/docs/DOC-13553
http://www.cisco.com/en/US/products/ps6120/products_configuration_example09186a00808efbd2.shtml
http://www.cisco.com/en/US/docs/security/asa/asa80/release/notes/asarn80.html#wp242704
http://www.sslshopper.com/article-most-common-openssl-commands.html
http://support.citrix.com/article/CTX106630
http://www.cisco.com/en/US/products/ps6120/products_configuration_example09186a00809fcf91.shtml

2013-02-06

Mageia2 on EC2: Cruising Altitude

This is my fourth post on getting Mageia2 running on Amazon Web Services' Elastic Compute Cloud. See my first post in the series for an overview.

In the last post, I addressed the problem of having only a test kernel by tweaking the Mageia kernel SRPM and creating a gzipped kernel that works with the version of PV-GRUB supplied by Amazon. Now I'll walk through the steps of building an EBS-backed instance instead of an instance-store backed one.

You need a working Mageia setup on an instance-store backed instance before you can create the EBS-backed one. Just launch the AMI created in the previous post and attach a 32GB EBS volume to it. Using the EC2 API tools, you create and attach the volume like this:

SIZE=32
TARGETAZ=us-east-1a
INSTID=i-09abcdef

CMD=($(ec2-create-volume --size $SIZE --availability-zone $TARGETAZ --type standard))
VOLID=${CMD[1]}
ec2-attach-volume $VOLID --instance $INSTID --device /dev/sdg
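
The volume can take a few seconds to actually attach; a simple poll on the device node (xvdg here, matching the xvd naming this kernel uses for /dev/sdg) avoids racing ahead of it:

# wait until the kernel sees the new block device
while [ ! -b /dev/xvdg ]; do sleep 2; done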


You will also need some other components:
  1. The "kernel-server" RPM created in the last post.
  2. A copy of ec2-get-ssh.sh for the mageia user.
The second component is there so you don't have to embed passwords in your AMI; instead, it pulls in ssh public keys that are imported to (or generated by) AWS.

Another difference from the instance-store build is that we add the kernel to urpmi's skip.list, since an upgrade would bring in a non-gzipped kernel. So, here are the steps for setting it all up:

mkdir $HOME/ec2

# everything from here forward needs to be done as root
sudo bash -o vi
cd $HOME/ec2
export PATH=$PATH:/sbin:/usr/sbin

# setup the filesystem
/sbin/mkfs -t ext4 /dev/xvdg

# mount the volume for the chroot
export CHRDIR=$HOME/ec2/loop
mkdir -p $CHRDIR
mount /dev/xvdg $CHRDIR

# create the minimum devices
mkdir $CHRDIR/dev
/sbin/makedev $CHRDIR/dev console
/sbin/makedev $CHRDIR/dev null
/sbin/makedev $CHRDIR/dev zero

# setup the minimum filesystems
mkdir $CHRDIR/etc
cat > $CHRDIR/etc/fstab << EOF
/dev/xvda1 /         ext4    defaults        1 1
none       /dev/pts  devpts  gid=5,mode=620  0 0
none       /dev/shm  tmpfs   defaults        0 0
none       /proc     proc    defaults        0 0
none       /sys      sysfs   defaults        0 0
EOF

# add required /proc filesystem
mkdir $CHRDIR/proc
mount -t proc none $CHRDIR/proc

# list the US mirrors and pick the best/fastest one (I went with mirrors.kernel.org below)
GET http://mirrors.mageia.org/api/mageia.2.x86_64.list | grep country=US
# setup the urpmi media locations in the chroot
urpmi.addmedia --distrib --urpmi-root $CHRDIR http://mirrors.kernel.org/mageia/distrib/2/x86_64
# install the minimum packages
urpmi --auto --urpmi-root $CHRDIR basesystem urpmi locales-en sshd sudo dhcp-client

# MASSIVE HACK TIME
rpm --root=$CHRDIR -Uhv custom-kernel/kernel-server-3.3.8-2.mga2-1-1.mga2.x86_64.rpm

# cleanup the desktop kernel inside the chroot
chroot $CHRDIR
urpme kernel-desktop-3.3.8-2.mga2-1-1.mga2
cd /boot
rm -f initrd-desktop.img vmlinuz-desktop
# confirm there's a good initrd
stat initrd-3.3.8-server-2.mga2.img
mkinitrd initrd-3.3.8-server-2.mga2.img 3.3.8-server-2.mga2
exit
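
Before writing the boot configuration, a quick look from outside the chroot confirms the server kernel, its initrd, and the vmlinuz-server/initrd-server.img symlinks that the menu.lst below relies on are all in place:

ls -l $CHRDIR/boot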

# set the kernel to load on boot
cat > $CHRDIR/boot/grub/menu.lst << EOF
default=0
timeout=0
title linux
  root (hd0)
  kernel /boot/vmlinuz-server ro root=/dev/xvda1 console=hvc0 BOOT_IMAGE=linux-nonfb
  initrd /boot/initrd-server.img
EOF

# do not upgrade the kernel, until upstream fixes the xz/gz issue
test -f $CHRDIR/etc/urpmi/skip.list.orig || cp -p $CHRDIR/etc/urpmi/skip.list $CHRDIR/etc/urpmi/skip.list.orig
cat > $CHRDIR/etc/urpmi/skip.list << EOF
# Here you can specify the packages that won't be upgraded automatically
# for example, to exclude all apache packages :
# /^apache/
/^kernel/
EOF

# configure the chroot network for ec2
cat > $CHRDIR/etc/sysconfig/network-scripts/ifcfg-eth0 << EOF
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
TYPE=Ethernet
USERCTL=yes
PEERDNS=yes
IPV6INIT=no
EOF
cat > $CHRDIR/etc/sysconfig/network << EOF
NETWORKING=yes
CRDA_DOMAIN=US
EOF

# configure ssh
test -f $CHRDIR/etc/ssh/sshd_config.orig || cp -p $CHRDIR/etc/ssh/sshd_config $CHRDIR/etc/ssh/sshd_config.orig
cat $CHRDIR/etc/ssh/sshd_config.orig |
    sed -e 's/^#UseDNS yes/UseDNS no/g' |
    sed -e 's/^PermitRootLogin no/PermitRootLogin without-password/g' > $CHRDIR/etc/ssh/sshd_config
# create login account
chroot $CHRDIR /usr/sbin/useradd --create-home --home /home/mageia --shell /bin/bash mageia
(umask 0227; echo "mageia ALL=(ALL) NOPASSWD:ALL" > $CHRDIR/etc/sudoers.d/mageia)

# setup ssh public key
cp ec2-get-ssh $CHRDIR/etc/rc.d/init.d/ec2-get-ssh
chmod 0750 $CHRDIR/etc/rc.d/init.d/ec2-get-ssh
chown root:root $CHRDIR/etc/rc.d/init.d/ec2-get-ssh
chroot $CHRDIR /sbin/chkconfig ec2-get-ssh on

# dismount the chroot
umount $CHRDIR/proc
umount -d $CHRDIR

Now that the EBS volume is all set, it needs to be snapshotted and registered as an AMI. Here's what you do:

ec2-detach-volume $VOLID --instance $INSTID --device /dev/sdg

# create a snapshot
CMD=($(ec2-create-snapshot --description "Mageia 2" $VOLID))
SNAPID=${CMD[1]}

# create AMI
AKIID="aki-88aa75e1"
ec2-register --name "Mageia 2" --description "Mageia 2" \
    --architecture x86_64 --root-device-name /dev/sda1 \
    --block-device-mapping /dev/sda1=$SNAPID --kernel $AKIID
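
ec2-register prints the id of the new AMI. From there, launching a test instance looks like this (the AMI id and key pair name are placeholders; substitute your own):

AMIID=ami-xxxxxxxx   # from the ec2-register output
ec2-run-instances $AMIID -t m1.small -k mykey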


Now you're ready to launch your EBS-backed Mageia 2 Linux instance! Enjoy!
