tony's geek stuff
stuff i'm working on. current project: chef!

Enabling Markdown on your apache webserver - update 3 (2014-04-05)

Last time on "Enabling Markdown on your apache webserver":
- I created a perl CGI handler to render Markdown into HTML
- I updated (rewrote) the CGI handler to use ruby and the redcarpet gem
And now, the exciting conclusion....
I've updated my CGI handler yet again with a few new features, and I've posted it on my GitHub account in the docs-on-clearance repo (get 'em cheap while they're all …).

Building Icinga on Amazon Linux (2014-03-27)

As part of a datacenter-to-cloud migration for $WORK, I recently needed to move our Icinga install to AWS. However, there is no current Icinga package for Amazon Linux. A set of rpms exists over at RPMforge, but Amazon Linux is already set up for EPEL instead. The way forward? Build the rpms.
It actually wasn't very hard at all. I spun up a builder instance, built the rpms, and copied them to …
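The feed cuts the post off here, but the general shape of an SRPM rebuild is easy to sketch. This is my guess at the flow, not the post's actual commands - the SRPM filename and the dependency list are placeholders that vary with the spec file:

# rough sketch: rebuild an Icinga source rpm on an Amazon Linux builder
sudo yum -y install rpm-build gcc make \
    libdbi-devel net-snmp-devel openssl-devel   # actual BuildRequires vary
rpmbuild --rebuild icinga-1.x.y-1.src.rpm
ls ~/rpmbuild/RPMS/x86_64/                      # the finished rpms land here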
Automation (2014-02-03)

From the venerable xkcd:
[image: xkcd, "Automation"]

Enabling Markdown on your apache webserver - redux (2013-10-07)

In a previous post I enabled previewing of Markdown-formatted documents using the Text::Markdown perl module. However simple that module was to implement, it only implements daringfireball Markdown. Things at $WORK have ramped up the adoption of Markdown, and that atrophied standard is no longer enough. So, I've had to find another renderer.
I started by looking at how GitHub renders …

Building a perl package for Amazon Linux, CentOS or RHEL (2013-07-22)

A previous post detailed how to build a .deb file for a perl module on Ubuntu. However, I needed the updated module as an rpm on an Amazon Linux system, so I worked out the procedure for that OS as well. It was a lot easier than I thought it would be.
# install general dependencies of Net::Amazon::EC2
sudo yum --enablerepo=epel install \
    perl-Net-Amazon-EC2 perl-File-Slurp perl-DBI …
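The snippet stops mid-command, but for the "build the rpm" half, the route I'd expect on an EPEL-enabled box is cpanspec plus rpmbuild. A sketch under that assumption - not necessarily the post's exact steps:

# generate a spec file and fetch the source tarball from CPAN, then build
sudo yum --enablerepo=epel install cpanspec rpm-build rpmdevtools
rpmdev-setuptree                      # create ~/rpmbuild if it doesn't exist
cpanspec Net::Amazon::EC2            # writes perl-Net-Amazon-EC2.spec + tarball
rpmbuild --define "_sourcedir $PWD" -ba perl-Net-Amazon-EC2.spec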
Piping STDOUT to one command but STDERR to a different command (2013-07-09)

Found this awesome stackoverflow answer and had to write it up as a note to myself:

./foobar.pl > >( logger -t stdout ) 2> >( logger -t stderr )
Specifically, I hope to use this to replicate all EBS snapshots taken on an instance, e.g.:
ec2-consistent-snapshot > >( ec2-replicate-snapshots ) 2> >( logger -t $PROGNAME )
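Note that > >( ... ) is bash process substitution, so this needs bash rather than plain sh. A quick way to convince yourself the streams really are split - a toy example of mine, not from the original answer:

# one line to each stream, tagged by two separate pipelines
( echo "hello stdout"; echo "hello stderr" >&2 ) \
    > >( sed 's/^/OUT: /' ) \
    2> >( sed 's/^/ERR: /' )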
Building a perl package for Ubuntu (2013-07-08)

I recently had a need to update an Ubuntu perl module for a client at $WORK. Specifically, I needed to update the Net::Amazon::EC2 module to version 0.23 to support tagging of AWS resources. Looking at CPAN, the library went unmaintained for close to 3 years before a new author picked it up. Things are looking up, though, as it seems this fall's release of Ubuntu will likely have the new package.
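The feed clips the rest, but the tool I'd reach for here is dh-make-perl, which can fetch a module from CPAN and build a .deb in one shot. A sketch, assuming that approach (version pinning omitted):

sudo apt-get install dh-make-perl
dh-make-perl --build --cpan Net::Amazon::EC2
# the Debian-style package name becomes libnet-amazon-ec2-perl
sudo dpkg -i libnet-amazon-ec2-perl_*.deb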
Forcing a service to stop during Chef execution (2013-03-05)

My latest project at work has been to get a Chef installation going for a client who is migrating a site from a traditional datacenter to Amazon Web Services. Part of the unique snowflake of a deploy process for this application is that it must be restarted on every deploy, because the config files are included in the deploy - apache's httpd.conf, tomcat's server.xml, etc., etc. Most …

Importing a SSL/TLS Wildcard Certificate and Private Key from your webserver onto your Cisco ASA 5500 series firewall (2013-02-28)

Whoops! The self-signed certificate on the corporate Cisco ASA 5520 firewall expired a month ago, and now it needs to be updated. However, we have a legitimate wildcard certificate issued by GeoTrust, so I figured out how to re-use that cert on the ASA by converting it with openssl into a format the ASA likes. Here are the steps:
1. convert all certs and keys to PEM format
mkdir asa
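The remaining commands are cut off by the feed, but step 1 with openssl typically looks something like the following. Filenames here are my placeholders, and you should adjust -inform to whatever format your cert and key are actually in:

cd asa
# certificate: DER (or already-PEM) in, PEM out
openssl x509 -inform DER -in wildcard.cer -outform PEM -out wildcard.pem
# private key to PEM as well
openssl rsa -in wildcard.key -out wildcard-key.pem
# the ASA wants cert + key + chain bundled as base64-encoded PKCS#12;
# the .b64 blob is what gets pasted into "crypto ca import <tp> pkcs12 <pass>"
openssl pkcs12 -export -in wildcard.pem -inkey wildcard-key.pem \
    -certfile geotrust-chain.pem -out wildcard.p12
openssl base64 -in wildcard.p12 -out wildcard-p12.b64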
…

Mageia2 on EC2: Cruising Altitude (2013-02-06)

This is my fourth post on getting Mageia2 running on Amazon Web Services' Elastic Compute Cloud. See my first post in the series for an overview.
In the last post, I addressed the problem of having only a test kernel by tweaking the Mageia kernel SRPM and creating a gzipped kernel that can be used with the version of PV-GRUB supplied by Amazon. Now I'll walk through the steps of building an EBS …
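The walkthrough is truncated in the feed, but the skeleton of an EBS-backed AMI registration with the old ec2-api-tools went roughly like this - the IDs, sizes and AKI are placeholders of mine, not values from the post:

# snapshot the prepared root volume, then register an AMI against it,
# pointing at Amazon's PV-GRUB kernel so it chain-loads our gzipped kernel
ec2-create-snapshot vol-12345678 -d "mageia2 root"
ec2-register -n "mageia2-ebs" -a x86_64 \
    --root-device-name /dev/sda1 \
    -b /dev/sda1=snap-12345678:8:true \
    --kernel aki-xxxxxxxx   # the PV-GRUB AKI for your region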
Vi IMproved (2012-12-21)

Ubuntu tweak #2 - diediedie nano die!

$ sudo update-alternatives --config editor
There are 4 choices for the alternative editor (providing /usr/bin/editor).

  Selection    Path         Priority   Status
------------------------------------------------------------
* 0            /bin/nano     40        auto mode
  1            /bin/ed      -100       manual mode
  2            …
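If you'd rather skip the menu entirely, update-alternatives can be pointed at vim non-interactively. The path below is the usual one on Ubuntu, but it may differ depending on which vim package you have installed:

sudo update-alternatives --set editor /usr/bin/vim.basic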
Disable dnsmasq in NetworkManager (2012-12-20)

I recently converted my work desktop to Ubuntu 12.10. Most things were better, but I was seeing horrible DNS lag from dnsmasq. To disable it, I did the following:
sudo vi /etc/NetworkManager/NetworkManager.conf
# comment out dns=dnsmasq
sudo restart network-manager
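For a scripted version of that edit, a sed one-liner does the same thing as the manual change - my shortcut, same effect:

sudo sed -i 's/^dns=dnsmasq/#dns=dnsmasq/' /etc/NetworkManager/NetworkManager.conf
sudo restart network-manager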
This will regenerate your resolv.conf, and you'll see your DNS servers listed directly instead of localhost.

Mageia2 on EC2: Stormy Weather (2012-12-11)
This is my third post on getting Mageia2 running on Amazon Web Services' Elastic Compute Cloud. See my first post in the series for an overview.
In my last post I described how to create and upload an AMI that allows you to run Mageia2 on EC2. There were two issues with that method:
- The EC2 instances are using a one-off unverified kernel, obtained for testing purposes only.
- The instances …

Mageia2 on EC2: Boarding procedures (2012-12-05)

This is my second post on getting Mageia2 running on Amazon Web Services' Elastic Compute Cloud. See my first post in the series for an overview.
The first step to creating a Mageia2 install on EC2 is to have a local Mageia2 system as your seed setup. Why? Because you must use urpmi, the Mageia package installer - the equivalent of apt-get on Ubuntu or yum on CentOS/RHEL/Amazon Linux. Yes, …
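The post is cut short here, but to give a flavor of urpmi in this seed role - installing a base Mageia 2 system into a directory that will become the image. The mirror URL and flags below are from memory, so treat this as a sketch rather than the post's actual commands:

# from the Mageia2 seed box: point urpmi at a Mageia 2 mirror and
# install a base system into a chroot that will become the EC2 image
sudo urpmi.addmedia --distrib --urpmi-root /mnt/mageia2 \
    http://mirrors.kernel.org/mageia/distrib/2/x86_64
sudo urpmi --urpmi-root /mnt/mageia2 --auto basesystem openssh-server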
Mageia2 on EC2: Flying in a different direction (2012-11-27)

At $WORK, we claim that, as a client's systems-oversight service, we are "distribution agnostic" - meaning we'll help you out regardless of which Linux distribution you're running. Most of the time we work with Ubuntu, RHEL, CentOS or Amazon Linux. However, a client recently decided on running Mageia2 GNU/Linux on Amazon Web Services' Elastic Compute Cloud, so I had to pick up the …

Enabling Markdown on your apache webserver (2012-10-26)

At $WORK, we're toying with moving all documentation to Markdown and git. To do that, I needed to be able to render it locally for preview before pushing to GitHub, Bitbucket or another yet-to-be-determined repository. This setup was quick, easy and painless. Here are the steps:

1. Install Text::Markdown as your converter. The perl-Text-Markdown RPM was in the …
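(The rest of the steps are clipped by the feed.) As a quick smoke test that the module works before wiring it into apache - my test, not part of the original steps:

# slurp a file, render it with Text::Markdown's exported markdown(), save HTML
perl -MText::Markdown=markdown -0777 -e 'print markdown(<>)' README.md > README.html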
Building RPMs cleanly (2012-08-23)

I recently found this script used to build a solr rpm, and I love how it solves so many RPM packaging problems with a few defines. Here's my slightly modified version for building an apache package, which leaves the SOURCES directory untouched and keeps a log of the build so you can go back and review it later:

#!/bin/sh -x
rm -rf BUILD RPMS SRPMS tmp || true
mkdir -p BUILD RPMS SRPMS tmp
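The feed clips the rest of the script. The "few defines" it praises presumably look like this - my reconstruction from the description above, with the spec path as a placeholder:

# redirect rpmbuild's working dirs into the current directory,
# leaving SOURCES alone, and keep a log of the whole build
rpmbuild \
    --define "_topdir $PWD" \
    --define "_tmppath $PWD/tmp" \
    -ba SPECS/httpd.spec 2>&1 | tee build.log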
Log housekeeping with python (2012-07-11)

This week I was finally able to finish some python code I've been writing for $WORK - a script to rotate webserver logs directly to S3. This task was similar to something I'd done a long, LONG time ago (14 years since the first rev!) in a programming language far, far away. I had a much bigger chip on my shoulder then - site analytics really isn't done with log parsing anymore, …

EBS snapshots and LVM2 (2012-06-29)

I've been meaning to try this for a while to see how it goes: use LVM2 to take an "instantaneous" snapshot of an EBS volume and then let AWS take its time. I found LVM wasn't as quick as I'd like. Also, I haven't performance-tested this, so I don't know how bad the latency will be. Either way, I think it's an easy way to get a consistent backup:
# prep
export MYAZ="us-east-1a"
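The listing stops here in the feed. The overall idea, though, is: freeze a point in time with an LVM snapshot, kick off the EBS snapshot, then drop the LVM snapshot. A sketch of that flow - the volume and LV names are placeholders, and ec2-create-snapshot is from the old ec2-api-tools:

# take a point-in-time LVM snapshot of the data volume
sudo lvcreate --snapshot --size 1G --name backupsnap /dev/vg0/data
# let AWS snapshot the underlying EBS volume at its leisure;
# ${MYAZ%?} strips the AZ letter to get the region (us-east-1a -> us-east-1)
ec2-create-snapshot --region ${MYAZ%?} -d "consistent backup" vol-12345678
# once the EBS snapshot is registered, the LVM snapshot can go
sudo lvremove -f /dev/vg0/backupsnap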
QoS for Asterisk/PiaF on CentOS with Cisco hard phones & switches (2011-12-29)

Now that I've moved into the new office for $WORK, I had to diagnose some phone issues with our new Asterisk-based PBX-in-a-Flash phone system. Thankfully, the new office setup is better in a few ways:
- All jacks in the office are active, with PoE
- All the switches are the same model number, Cisco WS-C3560G-48PS
- All the phones are the same model, Cisco SPA504G
After tweaking some SIP …

Rootless Root (2011-05-17)

ESR's UNIX Koans are always worth reading again... be enlightened!

rsync + FAT32 filesystem (2011-04-22)

Found a useful nugget in the rsync FAQ: if your destination filesystem when using rsync is FAT32, you need to add the --modify-window=1 option due to problems with modified times on FAT32. A working example:

rsync \
  --progress \
  --delete \
  --verbose \
  --archive \
  --modify-window=1 \
  /path/to/source/dir/ \
  /path/to/fat32/dir/

As always, remember to be careful about those …

Self-signing a certificate... quickly (2011-04-14)

I've been using SSL/TLS certs for a long, long time - I've even had to re-issue my personal CA cert after it expired at the five-year mark. However, every time I've issued a self-signed cert for an internal site, openssl prompted me interactively for the Country, State, Locality, etc., etc., blah, blah, blah. The lack of automation was exceptionally annoying. I knew the defaults could be customized so that …

Disabling TRACE and TRACK methods (2011-04-08)

After reading a blog post about how to disable TRACE and TRACK for compliance, I've taken an extra step - limit HTTP requests to only "the big three":

RewriteEngine On
RewriteCond %{REQUEST_METHOD} !^(GET|HEAD|POST)
RewriteRule .* - [F]

It's possible you might want to add "OPTIONS" to that list, or "DELETE|PUT" to be RESTful, but as with most implementations, YMMV.

python! (2011-02-12)

After a break, I've decided to pick up a new project - learn python. I have a specific work goal in mind: create a small application to create, manage, remove, map, etc. CloudFront distributions that use a custom origin server (i.e. not S3) across multiple AWS accounts. It seems that boto is the way to go. It has an uphill battle - the AWS-provided SDKs are really quite easy to use and are well …
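A postscript to the self-signing post above, since the feed cuts it off: the non-interactive trick I was after boils down to openssl's -subj flag. A sketch, with subject fields and filenames as my placeholders:

# one-shot self-signed cert, no interactive prompts
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
    -subj "/C=US/ST=New York/L=NYC/O=Example/CN=internal.example.com" \
    -keyout internal.key -out internal.crt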