QoS for Asterisk/PiaF on CentOS with Cisco hard phones & switches

Now that I've moved into the new office for $WORK, I had to diagnose some phone issues with our new Asterisk-based PBX-in-a-Flash phone system. Thankfully, the new office setup is better in a few ways:
  1. All jacks in the office are active, with PoE
  2. All the switches are the same model number, Cisco WS-C3560G-48PS
  3. All the phones are the same model, Cisco SPA504G
After tweaking some SIP settings in PiaF, I found myself looking into QoS. The old office did not have it configured, but I wanted to give it a second look.

Thankfully, Cisco has a QoS feature for those without a CCIE certification - Auto QoS. To enable QoS for our network here, the process was as follows:

! optional: enable debug to watch command macros execute
debug auto qos
! configure the switch
conf t
  ! cdp must be running
  cdp run
  ! first configure all end-user ports
  int range gi0/1 - 42
    cdp enable
    auto qos voip cisco-phone
  ! next configure the PBX port and uplink to other switch
  int range gi0/46, gi0/52
    cdp enable
    auto qos voip trust
! disable debug
no debug auto qos

And AFAICT that's that. Repeat for all switches and redundant ports for the PBX. For more details, there's a configuration example and additional documentation on Cisco's site.
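To spot-check the result, the 3560 has a few show commands for this - a sketch, reusing the interface names from the example above:

```
! verify what the auto qos macro generated on an end-user port
show auto qos interface gi0/1
! confirm QoS is globally enabled and view per-interface settings
show mls qos
show mls qos interface gi0/1
```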



rsync + FAT32 filesystem

Found a useful nugget in the rsync FAQ: if your rsync destination is a FAT32 filesystem, you need to add the --modify-window=1 option, because FAT32 stores modification times with only 2-second resolution. A working example would be:
rsync \
  --progress \
  --delete \
  --verbose \
  --archive \
  --modify-window=1 \
  /path/to/source/dir/ \
  /path/to/destination/dir
As always, remember to be careful about those trailing slashes!
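Per the rsync manpage, --modify-window=N makes rsync treat two mtimes as equal when they differ by at most N seconds, which absorbs FAT32's 2-second rounding. Roughly this comparison - a sketch of the idea, not rsync's actual source:

```shell
# same_mtime MTIME_A MTIME_B WINDOW
# mimics rsync's timestamp check: "equal" if the difference
# between the two mtimes is no more than WINDOW seconds
same_mtime() {
  d=$(( $1 - $2 ))
  [ "$d" -lt 0 ] && d=$(( -d ))
  [ "$d" -le "$3" ]
}
# a 1-second difference falls inside --modify-window=1
same_mtime 1300000000 1300000001 1 && echo "treated as unchanged"
```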


Self-signing a certificate... quickly

I've been using SSL/TLS certs for a long, long time - I've even had to re-issue my personal CA cert after it expired at the five-year mark. However, every time I issued a self-signed cert for an internal site, openssl prompted me interactively for the Country, State, Locality, etc. etc. blah, blah, blah. The lack of automation was exceptionally annoying. I knew the defaults could be customized so that only the Common Name would have to be entered, but that wasn't enough. The openssl req manual page has a sample config file that is supposed to suppress prompting ("Sample configuration containing all field values"), but it doesn't work. After spending considerable time today trying to craft a custom, template openssl.cnf file, I finally found a blog post mentioning the -subj argument, which completes the certificate request without any prompting. The only prompting left is from the rsa command, if you're encrypting your keyfile - and even that can be automated with the -passin arg, if needed. Here is a full example:
# FQDN of SSL/TLS site
CN="www.example.com"

# preflight
C="US"
ST="New York"
L="New York"
O="Example.com Inc."
OU="Systems Team"
emailAddress="hostmaster@example.com"

# create a private key
openssl genrsa -out ${CN}.key 2048
# create a certificate request
openssl req \
-new \
-subj "/C=$C/ST=$ST/L=$L/O=$O/OU=$OU/CN=$CN/emailAddress=$emailAddress" \
-key ${CN}.key \
-out ${CN}.csr
# create cert
openssl x509 -req -days 3650 -in ${CN}.csr -signkey ${CN}.key -out ${CN}.crt

# optional - encrypt key
# move key
mv ${CN}.key ${CN}.key.plain
# encrypt key
# (add '-passin pass:password' or '-passin file:pathname' for no prompting)
# see openssl(1) manpage
openssl rsa -des3 -in ${CN}.key.plain -out ${CN}.key.crypt
# rename key
mv ${CN}.key.crypt ${CN}.key
# clean up
rm ${CN}.key.plain
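As an aside, openssl can also do the key and the self-signed cert in one shot with req -x509, skipping the CSR step entirely - a sketch, reusing the same placeholder values (-nodes leaves the key unencrypted, matching the pre-encryption state above):

```shell
# one-shot alternative: private key + self-signed cert, no CSR
CN="www.example.com"
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
  -subj "/C=US/ST=New York/L=New York/O=Example.com Inc./OU=Systems Team/CN=${CN}" \
  -keyout ${CN}.key -out ${CN}.crt
# sanity check: print the subject and validity dates
openssl x509 -in ${CN}.crt -noout -subject -dates
```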


Disabling TRACE and TRACK methods

After reading a blog post about how to disable TRACE and TRACK for compliance, I've taken an extra step - limit HTTP requests to only "the big three" (GET, POST, HEAD):
        RewriteEngine On
        RewriteCond %{REQUEST_METHOD} !^(GET|POST|HEAD)$ [NC]
        RewriteRule .* - [F]
You might want to add "OPTIONS" to that list, or "DELETE|PUT" to be RESTful, but as with most implementations, YMMV.
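For what it's worth, on Apache 1.3.34/2.0.55 and later, TRACE can also be switched off directly with a core directive - though the rewrite approach above is still what catches TRACK and everything else:

```apache
# httpd.conf - disables TRACE at the server level
TraceEnable off
```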



Learning python

After a break, I've decided to pick up a new project - learn python. I have a specific work goal in mind: create a small application to create, manage, remove, map, etc. CloudFront distributions that use a custom origin server (i.e. not S3) across multiple AWS accounts. It seems that boto is the way to go. It has an uphill battle - the AWS-provided SDKs are really quite easy to use and are well documented.

EDIT: Gah! Talk about leaping before looking! I pull up the boto API reference page on cloudfront and I see the following:
This module is not well tested. Paging of distributions is not yet supported. CNAME support is completely untested. Use with caution. Feedback and bug reports are greatly appreciated.
Sounds like it's not quite ready for a) learning python with or b) using at work. <sigh> Learning python is still the next project, but I won't be using it for this concept.


Splitting traffic with an F5 BigIP LTM iRule

Another item filed under "notetoself" - how to split traffic by URI with an iRule applied to a virtual server on an F5 BigIP LTM.

when CLIENT_ACCEPTED {
  # remember the virtual server's default pool
  set default_pool [LB::server pool]
}
when HTTP_REQUEST {
  if { [HTTP::uri] starts_with "/path/to/split/off" } {
    pool pool_to_split_to
  } else {
    pool $default_pool
  }
}
Normally, I am against this type of hack. I believe that content should have a unique location - if there are two URLs that can get you to the same bit of content, people will use them interchangeably and it will cause nothing but headaches. However, in this case the $default_pool content is a Tomcat stack running a custom framework backed by Oracle, and the pool_to_split_to is a LAMP stack running drupal backed by MySQL - i.e., they couldn't be more different. This is the best way to unify the URL to access both without creating unnecessary extra hops across the network (say, using apache's mod_proxy_http).


Autoscaling revisited

Shortly after writing my previous post about AWS autoscaling, Amazon updated the autoscaling methodology. Instead of triggers, it now uses autoscaling policies, with CloudWatch alarms initiating the policy actions. So here's how I create and remove policies and alarms for an autoscaling group. Note: you can't have both triggers and policies on a group; you have to remove the triggers before adding the policies.

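The snippets below assume the group and load balancer names are already in the environment - these values are placeholders, substitute your own:

```shell
# placeholders - substitute your actual autoscaling group and ELB names
export ASGROUP=www-asgroup
export LBNAME=www-elb
```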
# create policies that will scale the group up and down
# note: cooldown is how many seconds to wait before
# applying the policy again
export COOLDOWN=300
export SCALEUP=`as-put-scaling-policy $ASGROUP-scaleUp \
--auto-scaling-group $ASGROUP \
--cooldown $COOLDOWN \
--adjustment=1 \
--type ChangeInCapacity`
if [ $? -eq 0 ]; then echo OK - $SCALEUP; else echo ERROR; fi

export SCALEDOWN=`as-put-scaling-policy $ASGROUP-scaleDown \
--auto-scaling-group $ASGROUP \
--cooldown $COOLDOWN \
--adjustment=-1 \
--type ChangeInCapacity`
if [ $? -eq 0 ]; then echo OK - $SCALEDOWN; else echo ERROR; fi

# create alarms to implement policies

# example: Latency on the ELB
mon-put-metric-alarm \
--alarm-name $ASGROUP-HighLatency \
--namespace "AWS/ELB" \
--metric-name Latency \
--statistic Average \
--period 60 \
--comparison-operator GreaterThanThreshold \
--threshold 5.0 \
--unit Seconds \
--evaluation-periods 5 \
--dimensions "LoadBalancerName=$LBNAME" \
--alarm-actions $SCALEUP
if [ $? -eq 0 ]; then echo OK; else echo ERROR; fi

mon-put-metric-alarm \
--alarm-name $ASGROUP-LowLatency \
--namespace "AWS/ELB" \
--metric-name Latency \
--statistic Average \
--period 60 \
--comparison-operator LessThanThreshold \
--threshold 0.5 \
--unit Seconds \
--evaluation-periods 5 \
--dimensions "LoadBalancerName=$LBNAME" \
--alarm-actions $SCALEDOWN
if [ $? -eq 0 ]; then echo OK; else echo ERROR; fi

And to clean up:

mon-delete-alarms --alarm-name $ASGROUP-HighLatency --force
mon-delete-alarms --alarm-name $ASGROUP-LowLatency --force
as-delete-policy $ASGROUP-scaleUp --auto-scaling-group $ASGROUP --force
as-delete-policy $ASGROUP-scaleDown --auto-scaling-group $ASGROUP --force
