Fix for Ansible “detected unhandled Python exception” via “‘module’ object has no attribute ‘DEFAULT_LOCAL_TMP’”

This one stumped me for a few minutes when, all of a sudden, Ansible couldn’t run on this system. A quick strace pointed to the fact that Ansible was using:


which was not from the 2.1 version I had installed from an EPEL RPM (the 2.0 was built locally on this system prior to the 2.1 update). There are lots of ways to clean this up properly, but since this was a non-prod system and I was in a hurry I just ran:

rm -rf /usr/lib/python2.7/site-packages/ansible-2.0.0-py2.7.egg/ (after testing that its removal didn’t break other things too badly, of course…you can always just mv it to a .old or whatever your preference is)
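If you hit something similar, a quick way to confirm which copy of a module Python is actually importing is to ask the interpreter directly. A sketch (adjust `python`/`python3` to whatever interpreter runs Ansible on your box):

```shell
# Pick whichever interpreter this box has (the system in the post ran python2.7)
PY=$(command -v python || command -v python3)

# Ask it where the 'ansible' import resolves to; a stale locally-built
# egg in site-packages shows up here immediately.
"$PY" -c 'import ansible; print(ansible.__file__)' 2>/dev/null \
    || echo "ansible not importable by this interpreter"

# The same trick works for any module; 'json' makes a handy stdlib sanity check:
"$PY" -c 'import json; print(json.__file__)'
```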

and recorded my findings for the next Google traveler to find and perhaps clean up properly.

Fail2ban filters added to my Gitlab

I needed a place to drop my few fail2ban filters as I am starting to grow them for some new personal projects, so I thought I would share in the event that someone else finds them beneficial. These are currently on my gitlab server. Enjoy!
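For anyone who hasn’t written one, a filter is just a failregex file dropped into filter.d plus a matching jail entry. A minimal hypothetical example (the app name, log format, and paths here are made up for illustration, not one of my actual filters):

```ini
# /etc/fail2ban/filter.d/myapp.conf  (hypothetical example filter)
[Definition]
# Match repeated failed logins from an access log line like:
#   1.2.3.4 - - "POST /login" 403
failregex = ^<HOST> .* "POST /login" 403
ignoreregex =

# /etc/fail2ban/jail.local addition enabling it
[myapp]
enabled  = true
port     = http,https
filter   = myapp
logpath  = /var/log/myapp/access.log
maxretry = 5
```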

nbd-client CLI Easter Egg

While trying to uncover the version of a CLI tool I stumbled across this small Easter egg.

# nbd-client -v
nbd-client: unrecognized option '-v'
E: option eaten by 42 mice

Just a quick note for others to enjoy…

Learning to love my I/O: a Raspi NBD retrospective


After struggling with I/O issues on my Raspi I decided I would try something a little less conventional: a network block device (NBD) served from a CentOS 7 box to my Pidora 21 Raspi 2 system.

I tested the NBD with FIO and found the following.

CPU (more thorough testing coming soon)

  • 100% = 100% of all cores not 100% per core
  • FIO run from a third server with FIO listening in server mode on the client. There was some overhead running FIO on the client, which was about 5-8%

CPU Load with SDCard IO

  • Server Load : N/A
  • Client Load : 30-40%

NBD Load

  • Server Load : (J1900 Celeron) Approx 3% (nbd-server 1025 /mnt/$FILE.NBD)
  • Client Load : (Raspi 2 – ARMv7 Processor rev 5) Approx 45-55% (nbd-client $NBD_SERVER 1025 /dev/nbd0); a decent part of this was IRQ load, which is harder to balance on a Pi


FIO Config


I mostly focus on max bandwidth (maxb), but the other results do have some minor significance so I left them.
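The original job file didn’t survive the move into this post; a hypothetical fio job along the lines of these tests would look like this (not my original config — sizes, runtimes, and job names are illustrative only):

```ini
; hypothetical-nbd-test.fio -- illustrative, not the original config
[global]
ioengine=libaio
direct=1
bs=4k
size=256m
runtime=60
time_based

[seq-write]
rw=write
stonewall

[rand-write]
rw=randwrite
stonewall

[seq-read]
rw=read
stonewall

[rand-read]
rw=randread
stonewall
```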

Micro-SDCard on Pi (had to space tests far apart to let the card “catch up”)

#SEQ Write
WRITE: io=61956KB, aggrb=1031KB/s, minb=1031KB/s, maxb=1031KB/s, mint=60070msec, maxt=60070msec
#Random Write
 WRITE: io=140372KB, aggrb=2333KB/s, minb=2333KB/s, maxb=2333KB/s, mint=60148msec, maxt=60148msec
#SEQ Read
 READ: io=967888KB, aggrb=16106KB/s, minb=16106KB/s, maxb=16106KB/s, mint=60093msec, maxt=60093msec
#Random Read
 READ: io=315660KB, aggrb=5260KB/s, minb=5260KB/s, maxb=5260KB/s, mint=60002msec, maxt=60002msec

NBD over the Pi’s 100Mb Ethernet

#SEQ Write
READ: io=262144KB, aggrb=9798KB/s, minb=9798KB/s, maxb=9798KB/s, mint=26753msec, maxt=26753msec
#Random Write
READ: io=262144KB, aggrb=3258KB/s, minb=3258KB/s, maxb=3258KB/s, mint=80437msec, maxt=80437msec
#SEQ Read
WRITE: io=262144KB, aggrb=19156KB/s, minb=19156KB/s, maxb=19156KB/s, mint=13684msec, maxt=13684msec
#Random Read
WRITE: io=262144KB, aggrb=12867KB/s, minb=12867KB/s, maxb=12867KB/s, mint=20372msec, maxt=20372msec

As you can see, aside from sequential read the NBD kills the MicroSD card in the Pi. As with any test, your results WILL vary.

Steps to set this up

  1. (Both) Make sure both boxes have the NBD kernel module built and the tools installed (CentOS dropped support for NBD a while back, but Fedora still has it)
    1. (Build module walkthrough here )
    2. (Tools) yum install nbd
      1. RHEL/CentOS 7: available in EPEL
      2. Fedora: it’s in the default repos
    3. (Verify Module is loaded)
      modprobe nbd && lsmod | grep nbd
      nbd 9421 1 
  2. (Server) Create a file using ‘dd’ via these steps :
    1. (Make File) : dd if=/dev/zero of=$FILE bs=1G count=$NUMBER_of_Gigs_You_Want (you can also argue for fallocate here if you understand what that changes)
    2. (Make FS on File) : ‘(mkfs.xfs | mkfs.ext4 | $WHATEVER_YOU_PREFER) $FILE’
    3. (OPTIONAL – Verify FS) file $FILE (would return something like “$FILE_PATH_AND_NAME: SGI XFS filesystem data (blksz 4096, inosz 256, v2 dirs)” )
  3. (Server) Create a script to start this at boot ‘nbd-server $PORT /path/to/file/to/export’ or use ‘/etc/sysconfig/nbd-server’ and start nbd-server however you prefer.
  4. (Client) Start an nbd-client with syntax similar to this ‘nbd-client $NBD_SERVER_HOST $PORT_OF_NBD_SERVER /dev/nbd0’, example ‘nbd-client NBDSERVER.EXAMPLE.COM 2929 /dev/nbd0’. UPDATE:

    For newer versions of NBD you can use the following to use the default port of 10809

    1. nbd-server -C /path/to/nbd-server/config
    2. nbd-client $NBD_SERVER  /dev/nbd0 -N $SECTION_NAME_IN_SERVER_CONFIG
  5. (Client) Mount NBD locally via ‘mount /dev/nbd$NUMBER_FROM_NBDCLIENT /PATH/TO/MOUNT’ example ‘mount /dev/nbd0 /usr/local/NBD/’
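The steps above, condensed into a server-side sketch. The path, size, port, and hostname are all examples; the nbd commands at the end need root and the packages from step 1, so they are shown for reference only:

```shell
#!/bin/sh
set -e
FILE=/tmp/export.nbd   # example path; use real storage for anything serious

# Step 2.1: create the backing file (tiny here; use bs=1G count=N for real use)
dd if=/dev/zero of="$FILE" bs=1M count=16 2>/dev/null

# Step 2.2: put a filesystem on it (-F lets mkfs work on a plain file)
mkfs.ext4 -q -F "$FILE"

# Step 2.3 (optional): verify file(1) recognizes the filesystem
file "$FILE"

# Step 3 (server) and steps 4-5 (client) -- privileged, for reference:
#   nbd-server 1025 "$FILE"
#   nbd-client nbdserver.example.com 1025 /dev/nbd0
#   mount /dev/nbd0 /usr/local/NBD
```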


  • If you use your Pi’s NIC for other network-related tasks this could cause issues when your NBD is under high load. I used a USB ethernet device on my Pi for other basic network needs, so the onboard ethernet is mostly allocated to the NBD and management.
  • This is not perfect and should not be used for mission-critical data! This was used to get space/speed in a pinch for a project. If used in anything resembling production, a lot more work would need to be done, or more robust solutions should be considered!
  • The results off the SD card varied wildly when compared to the NBD, but it was never close in any one area outside of sequential read.
  • NBD performance will vary based on what the server is using for storage/network and the load it is under. Your results WILL vary!

Adventures with DD

In my humble opinion, when it comes to CLI tools `dd` ranks pretty high in my toolkit, right there with `nc` and a few other ol’ favorites.

Today I wanted to show a quick hack on how to determine information about a block device using dd. This goes along the same path as a previous post about using `dd` to get LVM configuration off devices.

Well, enough talk; let’s get down to the CLI:


# Thanks to file(1) magic(5)
# Example 1 - CentOS 6 LVM2
>dd if=/dev/sda3 of=/tmp/info bs=512 count=1000
1000+0 records in
1000+0 records out
512000 bytes (512 kB) copied, 0.0190352 s, 26.9 MB/s
> file /tmp/info
/tmp/info: LVM2 (Linux Logical Volume Manager) , UUID: M7pUi7daFBsEQ95UNJUd5pN604qSa0Z

#Example 2 - CentOS 7 LVM2
> dd if=/dev/sda2 of=/tmp/info bs=4k count=100
100+0 records in
100+0 records out
409600 bytes (410 kB) copied, 0.00487443 s, 84.0 MB/s
> file /tmp/info
/tmp/info: LVM2 PV (Linux Logical Volume Manager), UUID: fYcfT0-3Oyk-J87h-A6SV-rYCc-q1of-zhx7Fd, size: 119508303872

#Example 3 - CentOS 7 XFS
> dd if=/dev/sda1 of=/tmp/info bs=4k count=100
100+0 records in
100+0 records out
409600 bytes (410 kB) copied, 0.00990939 s, 41.3 MB/s
> file /tmp/info
/tmp/info: SGI XFS filesystem data (blksz 4096, inosz 256, v2 dirs)

#Example 4 - CentOS 7 BTRFS
> dd if=/dev/vdb2 of=/tmp/info bs=4k count=100
100+0 records in
100+0 records out
409600 bytes (410 kB) copied, 0.00324626 s, 126 MB/s
> file /tmp/info
/tmp/info: BTRFS Filesystem (label "----_fs", sectorsize 4096, nodesize 16384, leafsize 16384)

#Example 5 - CentOS 6 - EXT4
> dd if=/dev/sda1 of=/tmp/info bs=4k count=100
100+0 records in
100+0 records out
409600 bytes (410 kB) copied, 0.0132642 s, 30.9 MB/s
> file /tmp/info
/tmp/info: Linux rev 1.0 ext4 filesystem data (needs journal recovery) (extents) (huge files)
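To run the same probe across every partition of a disk, a small loop works. The device glob is an example (adjust for your disks) and reading raw devices needs root; note too that `file -s` can read a block device directly and skip the dd step entirely:

```shell
# Probe each partition with dd + file and report what it contains.
for dev in /dev/sda[0-9]; do
    [ -e "$dev" ] || continue
    dd if="$dev" of=/tmp/probe bs=4k count=100 2>/dev/null || continue
    printf '%s: %s\n' "$dev" "$(file -b /tmp/probe)"
done
```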

I always forget to test this when I have more “esoteric” filesystems in my home lab. If anyone can test and post a comment with the results from other filesystems or distros I would appreciate it!

Yum errors on primary.xml.gz or similar when behind squid (clearing and reloading a URL in Squid)

Verify the entry is getting hit in your cache by watching your logs

TCP_MEM_HIT/200 1597 GET – HIER_NONE/- application/x-gzip (or similar)

To reload this entry run

squidclient -r $URL


To clear this entry

squidclient -m PURGE $URL

You can also whitelist this in several ways if you do not want this to be cached, or do not want it cached for long (i.e. 24 hours), but I will save that for another post.
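Putting the two together, a purge-then-refresh of a stale repodata file looks like this. The URL is hypothetical, and the guard just keeps the snippet harmless on boxes without squidclient:

```shell
URL="http://mirror.example.com/centos/7/os/x86_64/repodata/primary.xml.gz"

if command -v squidclient >/dev/null 2>&1; then
    squidclient -m PURGE "$URL"          # drop the cached object
    squidclient -r "$URL" > /dev/null    # force a fresh fetch into the cache
else
    echo "squidclient not installed here"
fi
```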

Caching packages and yum-related information in squid can be helpful if you have a large number of machines behind a company/personal proxy and have limited (either in speed or monthly amounts) bandwidth at your location. While running a local mirror is preferred for many reasons, this could fill that role until such time as you feel you need the complexity of a local mirror.

Linux Brain Teaser – Fall 2014

This was a fun problem exchanged with a coworker. The problem is simple: run this

ssh $USER@$HOST history 

and have it return the remote history. Lots of ways to solve this one. If you want to confirm you solved it without giving it away just email/PM me. Otherwise show your work in the comments section. Since I have such ridiculously low traffic I will post the answer in 2015!

Quick Hack to use DD to get LVM configs

This is more of an exercise than a real-world use example. I have used something similar in a disaster recovery situation, so it does have some merit…but good planning can prevent needing hacks like this.


# Dump the first chunk of each PV so its embedded LVM metadata is saved
for i in $(pvdisplay | awk '/PV Name/ {print $3}'); do
    dd if="$i" of="/tmp/lvm_config_for_$(echo "$i" | sed 's/\//_/g')" bs=$(stat -f -c %s "$i") count=10
done
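Once dumped, the useful part is that the PV header stores the VG metadata as plain text, so strings(1) pulls it straight out of the raw copy. The filename below is what the loop above would produce for a hypothetical /dev/sda2:

```shell
# LVM keeps a text copy of the VG metadata near the start of each PV;
# strings(1) makes it readable from the raw dump (filename is an example).
f=/tmp/lvm_config_for__dev_sda2
if [ -f "$f" ]; then
    strings "$f" | head -40
fi
```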

KVM Local Network VMs TCP Congestion Algorithm on CentOS 6.5

First off I have no idea what made me want to test this…likely because it was easy to script and let run a few times while I went off and did other things.

What I am doing is testing network bandwidth between two KVM VMs on a single system using a bridge, the virtio network driver, and CentOS 6.5 as the Client and Host OS.

A little info on how to reproduce this. To find what your system supports, run

ls /lib/modules/`uname -r`/kernel/net/ipv4/

and look for tcp_* to see what to populate in the script.
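You can also check what is active right now. iperf’s -Z flag sets the algorithm per connection; the sysctl below (shown commented out) changes the system-wide default instead:

```shell
# Current default algorithm and everything currently available:
cat /proc/sys/net/ipv4/tcp_congestion_control 2>/dev/null || echo "not available here"
cat /proc/sys/net/ipv4/tcp_available_congestion_control 2>/dev/null || true

# To change the system-wide default (needs root):
# sysctl -w net.ipv4.tcp_congestion_control=htcp
```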

Then take 30 seconds to write a little for loop with these…


LOG=/tmp/tcp_cong_tests.log   # example path

#Clear Log....
> "$LOG"

for i in cubic bic highspeed htcp hybla illinois lp scalable vegas veno westwood yeah; do
    for s in {1..10}; do
        echo "Testing with : $i" >> "$LOG"
        echo "Run $s" >> "$LOG"
        iperf -Z "$i" -c sec01 >> "$LOG"
    done
done

Results are all in Mbps and are based on 20 total runs almost back to back. These are VERY inconclusive but at least warrant more testing.

UPDATE – I am doing more testing and found that the longer run times (120 seconds vs 10) are showing MUCH more consistent numbers.

It appears that vegas and yeah don’t like the way these local bridged networks handle cwnds and ssthresh, among other things.

It also might be worth further testing to see how/if these affect RTT among other network variables.


Fixes for named (bind) errors

I ran into a few errors during load testing on my bind server the other day and found ways to quickly fix them. Your mileage may vary, but for me these helped. Note these are just the configuration names with no settings! I did this so you can evaluate what’s best for your system!

I will make this my default post for these errors and the config changes I did to fix them:

ISSUE: named[1698]: dispatch 0x7fb0180cd990: open_socket( -> permission denied: continuing
FIX: raise the range of ports in `use-v4-udp-ports`; make sure that range does not overlap with existing UDP services.

ISSUE: named[932]: clients-per-query increased to 20 (or other number)
FIX: Raise `max-clients-per-query` and `clients-per-query`; a “0” will set them to unlimited. Be careful of this due to resource exhaustion!

ISSUE: DNS queries time out or are lost under load (dnsperf or another tool can show this).
FIX: (OS) set “net.core.rmem_max” and “net.core.rmem_default”
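For reference, the first two knobs live in the options block of named.conf. A hypothetical excerpt with made-up values (tune them to your own load, per the note above):

```conf
// Hypothetical named.conf excerpt -- values are examples, not recommendations
options {
        // avoid "permission denied" on socket opens: a wide, unused port range
        use-v4-udp-ports { range 10240 65535; };

        // raise query clamps ("0" = unlimited; watch resource exhaustion)
        clients-per-query     20;
        max-clients-per-query 100;
};
```

The net.core.rmem_* settings are OS-side and go in /etc/sysctl.conf (or a sysctl -w at runtime), not in named.conf.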