Archive for the ‘Linux’ Category.

Spammers adopt Facebook headers?

I saw these headers yet again today in a spam message that found its way into a spam trap I have.

X-Priority: 3
X-Mailer: ZuckMail [version 1.00]
X-Facebook-Notify: password_reset; mailid=
Errors-To: terrace45@rotortug.com
X-FACEBOOK-PRIORITY: 1
MIME-Version: 1.0

The offending sender is 91.90.12.239 which, surprise surprise, isn’t a Facebook IP. I am working on a SpamAssassin rule for this; if anyone wants a “beta” copy of the meta rule, let me know.

OK, I tested it and it appears to be working. The last 24 hours have seen over 100 hits, all obvious spam (total volume during that time was 564K). The meta rule I am using is:

header CS_881                   X-Mailer =~ /\bZuckMail\b/i
header CS_882                   Received !~ /\bfacebook\.com\b/i
meta FAKEFACEBOOK_01            (CS_881 && CS_882)
score FAKEFACEBOOK_01           3.9

Change the header names, meta names, and score to reflect what you feel is best for your system.
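
For anyone who has not added custom rules before, a minimal sketch of wiring this in, assuming a stock SpamAssassin layout (the path and service name may differ on your distribution):

# Add the four rule lines above to your site-wide config,
# e.g. /etc/mail/spamassassin/local.cf, then:
spamassassin --lint          # sanity-check that the config parses
service spamassassin restart # restart spamd so it picks up the new rule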

Google WiFi blunder or wake-up call?

Love it or hate it, Google has around 600GB of captured WiFi data! I do not want to get into a war of opinions about this, but I do think it should be a wake-up call for everyone using wireless who has not secured their network to do so.

If you don’t know how to do this, find a family member, neighbor, Nerd Herd person, Google search, or whoever to do it for you (even if it costs money). Aside from keeping your neighbors “out of your bushes”, it will help keep your data safe from random war driving, Google or otherwise.

RHEL 6 Beta... I am finally finding the time!

[root@localhost ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.0 Beta (Santiago)

LET THE FLOGGINGS (benchmarks) BEGIN!

Update: It appears that iotop, a project I have been eyeing for some time, is now an option in the standard install. This is very nice; I hope they add some of the other system tools (like an updated dstat, atop, and htop) as well.

Update: I finally ran a ps (I know, I am slow getting to these kinds of important things these days) and I noticed sendmail is gone! Oh my, now that’s a change for the better. After 12 years of living with it as the default mailer, I am excited to get to know Postfix! I have run it before in the past, but 12 years of running qmail doesn’t leave one much time to play around with other mailers. I am not biased: I think the top two MTAs are qmail and Postfix, so I can’t complain about the choice.
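
If you want to double check which MTA a box like this is actually using, a quick sketch, assuming RHEL’s usual alternatives setup:

# The mta alternative shows which mailer currently owns /usr/sbin/sendmail
alternatives --display mta

# Or ask RPM directly which package provides the sendmail binary
rpm -qf /usr/sbin/sendmail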

Atoms vs GlusterFS (and why I like MSI-X)

Well, I am in the process of testing clustered FS solutions for work and figured it would also be a great chance to put our Atom product (SuperMicro’s 5015A-H) through its paces. During the testing phase, as seen here, I found that the tests would need more horsepower to really be worthwhile.

I did find out that the Atoms can do about 36MB/s using replicated GlusterFS with caching tweaks before the processor maxes out. On my newer MSI-X systems the Ethernet interrupt load would have been spread across several (or all) cores, but since the Atoms only have MSI capabilities on their NICs, I watched as one CPU pegged and the other one sat back and drank a Mojito. I was almost tempted to fire up a few high-CPU tasks and use ole taskset to make the other CPU suffer, but I refrained.
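
For the curious, this is roughly what I was looking at and what I was tempted to do; the interface name and core number here are assumptions for illustration:

# With MSI-X you see multiple queue vectors spread across the CPU columns;
# with plain MSI a single vector hammers one core
grep eth0 /proc/interrupts

# Pin a CPU-hungry task to the idle core (core 1 here) so it suffers too (Ctrl-C to stop)
taskset -c 1 gzip -9 < /dev/urandom > /dev/null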

I will be updating once I can get a more powerful test setup going.

Apache achieves Leet-ness

I was troubleshooting a coding issue for a customer the other day when I noticed the number of requests currently being processed. I am going to write about this in the coming days. Basically, a code issue (we do not support code for this customer) caused a large number of “Sleep” states in MySQL as well as a huge number of ‘W Sending Reply’ states in Apache.

Current Time: Thursday, 11-Feb-2010 11:47:54 CST
Restart Time: Thursday, 11-Feb-2010 09:17:12 CST
Parent Server Generation: 0
Server uptime: 2 hours 30 minutes 41 seconds
Total accesses: 226733 - Total Traffic: 10.8 GB
CPU Usage: u1812.56 s80.08 cu.04 cs0 - 20.9% CPU load
25.1 requests/sec - 1.2 MB/second - 49.8 kB/request
1337 requests currently being processed, 113 idle workers
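
If you want to watch for this pattern yourself, something along these lines works, assuming mod_status is enabled and reachable locally; hostname and credentials are placeholders:

# Machine-readable scoreboard; the W's are workers stuck in Sending Reply
curl -s 'http://localhost/server-status?auto' | grep Scoreboard

# Count MySQL connections idling in Sleep
mysql -u root -e 'SHOW PROCESSLIST' | grep -c Sleep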

5 Minute Disk I/O in KB

THIS IS LEGACY; I HAVE REPLACED IT. I am leaving this up for archival purposes, as it does still work, just not as accurately as I would like.
OS: CentOS 5.4
Arch : x86_64
Version : 0.1a
Required Packages : net-snmp, sysstat
snmpd conf addition:
exec .1.3.6.1.4.1.2021.40 SARDISKIO /usr/bin/mrtg_diskio
Client Script Used:
<script>
#!/bin/bash
#V 0.1b MisterX Dec 9th 2009
# Replace md0-md3 with the drives you want to watch
#OUTPUT example for first Drive :
#1.3.6.1.4.1.2021.40.101.1 tps
#1.3.6.1.4.1.2021.40.101.2 kB_read/s
#1.3.6.1.4.1.2021.40.101.3 kB_wrtn/s
#1.3.6.1.4.1.2021.40.101.4 kB_read
#1.3.6.1.4.1.2021.40.101.5 kB_wrtn
# and so on for each additional Drive. A snmpwalk -v2c -On -c $community $host 1.3.6.1.4.1.2021.40.101 will show
#you the full list for all drives.
for d in md0 md1 md2 md3; do

        # One iostat snapshot per device; match the device name exactly so
        # e.g. md1 does not also pick up md10, then print fields 2-6
        # (tps, kB_read/s, kB_wrtn/s, kB_read, kB_wrtn) one per line
        /usr/bin/iostat -dk | awk -v dev="$d" \
                '$1 == dev { print $2; print $3; print $4; print $5; print $6 }'

done
</script>
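
After dropping the script in place and adding the exec line, a quick sanity check along these lines should return the values; the community string and host are placeholders:

# Make the script executable and restart snmpd so it sees the exec line
chmod +x /usr/bin/mrtg_diskio
service snmpd restart

# Walk the exec OID; expect five values per watched drive
snmpwalk -v2c -On -c public 127.0.0.1 .1.3.6.1.4.1.2021.40.101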

MRTG Code :

<mrtg_config>
Target[$server_name-disk]: 1.3.6.1.4.1.2021.40.101.4&1.3.6.1.4.1.2021.40.101.5:$community@$remote_server
Title[$server_name-disk]:  Disk $drive 5 min Average I/O Utilization
MaxBytes[$server_name-disk]: 10240000000000000
PageTop[$server_name-disk]: <H1>5 min Avg. I/O Utilization Report</H1>
kmg[$server_name-disk]: KB,MB,GB
LegendI[$server_name-disk]: 5 min Avg. I/O KBread
LegendO[$server_name-disk]: 5 min Avg. I/O KBwrite
Legend1[$server_name-disk]: 5 min Avg. I/O KBread
Legend2[$server_name-disk]: 5 min Avg. I/O KBwrite
YLegend[$server_name-disk]: Kilobytes
ShortLegend[$server_name-disk]: &
Options[$server_name-disk]: growright,nopercent
</mrtg_config>
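
To actually poll it, a cron entry along these lines does the trick; the cfg path is a placeholder for wherever you keep the Target above:

# /etc/crontab entry: poll every 5 minutes to match the 5 minute averages
*/5 * * * * root env LANG=C /usr/bin/mrtg /etc/mrtg/diskio.cfg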

This is just an average to get an overview and will never replace good administration.
It is only meant to track trends and general issues, but if you use MRTG you probably have other tools you use and already know this ;)

Debugging Apache segfault with strace

OS: CentOS 4.8
Apache : Custom RPM from source with only a single change to the location of the suexec directory

strace -t -f -v -p $process -o /path/to/outputfile (note: $process is the primary Apache process)

To find the primary Apache process you do a:

ps -ef | grep httpd

and it returns something like this:

apache   26898 22378  8 13:50 ?        00:00:01 /usr/sbin/httpd -k start

the second number, 22378, is the PPID, which is the PID of the Apache parent process. I then waited for a:

Dec 11 10:02:20 web02 kernel: httpd[7121]: segfault at 0000007fbf3fff0c rip 0000002a9567344a rsp 0000007fbf3ffe90 error 6

in my /var/log/messages. Once that came, I did a:

grep SIGSEGV /path/to/file_generated_w/strace

and noted the times and PIDs. Here is an example output:

19730 12:07:35 --- SIGSEGV (Segmentation fault) @ 0 (0) ---
19784 12:08:56 --- SIGSEGV (Segmentation fault) @ 0 (0) ---

I then grepped the PIDs with a segfault (19784 and 19730 in the above example) out to separate files and began reading. To do that I ran:

grep 19730 /path/to/file_generated_w/strace > /tmp/out.19730

It was in these files I found my problem. Your mileage may vary, but I found this method much easier than using the Apache config setting CoreDumpDirectory, which requires several changes that have to be undone. The CoreDumpDirectory setting also requires a few restarts of Apache, which in a production environment can be undesirable.
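
As an aside, strace can do this per-PID splitting for you if you pair -ff with -o; a sketch using the parent PID from the example above:

# -ff with -o writes one trace file per traced process (e.g. /tmp/httpd.trace.19730),
# which saves the grep-to-separate-files step
strace -t -f -ff -v -p 22378 -o /tmp/httpd.trace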

The main caveat to using strace is that, on a busy server, you can generate 100-300M of logs per minute, so make sure you have the disk space on the partition you are sending strace output to.

Linux or Windows

To throw my $.02 into this fray I say:

Linux is for work and Windows is for play :)

But seriously, I have been amazed at how many free tools Linux includes with the OS (or via the net/repos).

Then there is the issue of the constantly evolving Windows “shell”, which keeps changing commands with almost every major release of the OS. With Linux I can run basically the same commands, with little or no changes, that I ran almost 15 years ago when I started playing around with the OS. If Windows did this I would have much more respect for them and their OS.

sysbench – Xeon X5550 16G

I finally got the chance to do some benchmarking on the new X5550 Xeons. Here is what I came up with using sysbench.

System:
CPU: 2x Xeon X5550 (8 cores)
RAM: 16G
Hardware Vendor (Model): Dell (R510)
OS: CentOS release 5.4 (Final)
Kernel: 2.6.18-164.6.1.el5 x86_64
Hard drives: OS on RAID 1, /var/lib/mysql on RAID 10
Hard drive controller: PERC 6

my.cnf
innodb_log_group_home_dir=/var/log/innodb_logs
innodb_log_file_size=256M
innodb_log_files_in_group=2
innodb_buffer_pool_size=6G
innodb_additional_mem_pool_size=60M
innodb_log_buffer_size=4M
innodb_thread_concurrency=0 #As of MySQL 5.0.19, 0 makes this unlimited
innodb_file_per_table=1
innodb_flush_log_at_trx_commit=2 #Risky, but not a worry for this customer due to mainly static data.

Sysbench Command:
sysbench --test=oltp --db-driver=mysql --num-threads=16 \
--mysql-user=root --max-time=60 --max-requests=0 --oltp-read-only=on run
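
(Note the oltp test needs a table to hit; if you are recreating this, a prepare pass along these lines comes first:)

# Creates the sbtest table the oltp run reads from
sysbench --test=oltp --db-driver=mysql --mysql-user=root prepare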

Test Result (subsequent tests were within a small percentage of this result):
OLTP test statistics:
    queries performed:
        read:                            4457866
        write:                           0
        other:                           636838
        total:                           5094704
    transactions:                        318419 (5306.75 per sec.)
    deadlocks:                           0      (0.00 per sec.)
    read/write requests:                 4457866 (74294.56 per sec.)
    other operations:                    636838 (10613.51 per sec.)
Test execution summary:
    total time:                          60.0026s
    total number of events:              318419
    total time taken by event execution: 958.0739
    per-request statistics:
         min:                                  0.98ms
         avg:                                  3.01ms
         max:                                334.80ms
         approx.  95 percentile:              10.86ms

Threads fairness:
    events (avg/stddev):           19901.1875/1010.54
    execution time (avg/stddev):   59.8796/0.03
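
For the disk side I ran sysbench’s fileio test. Note that fileio needs a prepare pass first to create the test file; if you are reproducing this, something like:

# Creates the 128M test file the rndrd run below reads from
sysbench --test=fileio --file-num=1 --file-total-size=128M prepare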

sysbench --test=fileio --max-time=60 --max-requests=1000000  --file-num=1 --file-extra-flags=direct --file-fsync-freq=0  --file-total-size=128M --file-test-mode=rndrd run
sysbench 0.4.10:  multi-threaded system evaluation benchmark
Running the test with following options:
Number of threads: 1
Extra file open flags: 16384
1 files, 128Mb each
128Mb total file size
Block size 16Kb
Number of random requests for random IO: 1000000
Read/Write ratio for combined random IO test: 1.50
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random read test
Threads started!
Time limit exceeded, exiting...
Done.

Operations performed:  910652 Read, 0 Write, 0 Other = 910652 Total
Read 13.895Gb  Written 0b  Total transferred 13.895Gb  (237.15Mb/sec)
15177.50 Requests/sec executed