An oddity picked up today: snmpwalking a Netgear ReadyNAS with net-snmp 5.5 gives significantly different results from the same walk done with net-snmp 5.4.2.1:
From an Ubuntu 10.04 machine
$ snmpwalk --version
NET-SNMP version: 5.4.2.1
$ snmpwalk -v1 -cpublic 10.1.1.207 enterprises.4526.18.7.1
SNMPv2-SMI::enterprises.4526.18.7.1.1.1 = INTEGER: 1
SNMPv2-SMI::enterprises.4526.18.7.1.2.1 = STRING: "Volume C"
SNMPv2-SMI::enterprises.4526.18.7.1.3.1 = STRING: "RAID Level X"
SNMPv2-SMI::enterprises.4526.18.7.1.4.1 = STRING: "ok"
SNMPv2-SMI::enterprises.4526.18.7.1.5.1 = INTEGER: 4262912
SNMPv2-SMI::enterprises.4526.18.7.1.6.1 = INTEGER: 3403776
From a FreeBSD 8.1 machine
[root@vm-fbsd81 ~]# snmpwalk --version
NET-SNMP version: 5.5
[root@vm-fbsd81 ~]# snmpwalk -v1 -cpublic 10.1.1.207 enterprises.4526.18.7.1
SNMPv2-SMI::enterprises.4526.18.7.1.1.1 = INTEGER: 1
SNMPv2-SMI::enterprises.4526.18.7.1.2.1 = STRING: "Volume C"
SNMPv2-SMI::enterprises.4526.18.7.1.3.1 = STRING: "RAID Level X"
SNMPv2-SMI::enterprises.4526.18.7.1.4.1 = STRING: "ok"
SNMPv2-SMI::enterprises.4526.18.7.1.5.1 = INTEGER: 4262912
SNMPv2-SMI::enterprises.4526.18.7.1.6.1 = INTEGER: 3403776
NET-SNMP-AGENT-MIB::nsModuleName."".1.0.0 = STRING:
NET-SNMP-AGENT-MIB::nsModuleName."".1.1.0 = STRING:
NET-SNMP-AGENT-MIB::nsModuleName."".1.2.0 = STRING:
NET-SNMP-AGENT-MIB::nsModuleName."".7.1.3.6.1.2.1.4.127 = STRING: ip
NET-SNMP-AGENT-MIB::nsModuleName."".7.1.3.6.1.2.1.5.127 = STRING: icmp
NET-SNMP-AGENT-MIB::nsModuleName."".7.1.3.6.1.2.1.6.127 = STRING: tcp
[snip another 1300 lines]
And I even tried net-snmp 5.5 for Windows, which had the same results as the FreeBSD example.
Something has changed - perhaps there's a bug in the ReadyNAS MIB that was undetected by net-snmp 5.4.x, but net-snmp 5.5 is being strict about it? Perhaps there's a bug in net-snmp 5.5? I don't know enough to say for sure.
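Either way, if the extra output just gets in the way, a blunt workaround is to filter the walk back down to the subtree you actually asked for - a quick sketch:
$ snmpwalk -v1 -cpublic 10.1.1.207 enterprises.4526.18.7.1 | grep 'enterprises\.4526\.'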
Monday, December 13, 2010
Sunday, November 28, 2010
ReadyNAS SNMP agent dies... aaarrgggh
If you've been following along, you'll be aware that I set up Nagios monitoring of our ReadyNAS units via SNMP. Happiness ensues! Until Nagios starts spitting out warnings:
readyNAS temp is UNKNOWN SNMP problem - No data received from host
Oh crud. The box is still happily clicking along... responding to pings, frontview (web management interface) is still working. And what's strangest is that most of SNMP is still responding:
$ snmpwalk -v1 -cpublic em-nas system
SNMPv2-MIB::sysDescr.0 = STRING: Linux em-nas 2.6.17.14ReadyNAS #1 Wed Sep 22 04:42:09 PDT 2010 padre
SNMPv2-MIB::sysObjectID.0 = OID: NET-SNMP-MIB::netSnmpAgentOIDs.10
DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (50273528) 5 days, 19:38:55.28
SNMPv2-MIB::sysContact.0 = STRING: root
[snip all the exciting rest of the system section of the MIB]
But when you try to walk the ReadyNAS-specific section of the MIB:
$ snmpwalk -v1 -cpublic em-nas enterprises.4526
[nothing!]
Hmmm... taking a shrewd guess, the ReadyNAS section of the MIB is probably implemented as a sub-agent, and that sub-agent has died. Let's have a poke around... reading /etc/init.d/snmpd, sure enough, that script starts up the usual snmpd and snmptrapd AND /usr/sbin/readynas-agent - ahah, so is this process running?
em-nas:~# ps axwu | grep [a]gent
[nothing!]
So a quick solution:
em-nas:~# /etc/init.d/snmpd restart
em-nas:~# ps axwu | grep [a]gent
root     29772  0.1  1.3   9600  3168 ?   S    09:28   0:00 /usr/sbin/readynas-agent
Now check that the ReadyNAS MIB works again:
$ snmpwalk -v1 -cpublic em-nas enterprises.4526.18.7.1
SNMPv2-SMI::enterprises.4526.18.7.1.1.1 = INTEGER: 1
SNMPv2-SMI::enterprises.4526.18.7.1.2.1 = STRING: "Volume C"
SNMPv2-SMI::enterprises.4526.18.7.1.3.1 = STRING: "RAID Level X"
SNMPv2-SMI::enterprises.4526.18.7.1.4.1 = STRING: "ok"
SNMPv2-SMI::enterprises.4526.18.7.1.5.1 = INTEGER: 2837504
SNMPv2-SMI::enterprises.4526.18.7.1.6.1 = INTEGER: 2177024
Yep, that's got it. Now to follow up: why does readynas-agent crash?
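(While I chase that down, a crude band-aid would be a cron-driven watchdog on the NAS itself - purely a sketch of my own, not something that ships with the ReadyNAS:)
# run from root's crontab every few minutes, e.g. */5 * * * * /root/agent-watchdog.sh
if ! ps axwu | grep '[r]eadynas-agent' > /dev/null
then
    /etc/init.d/snmpd restart
fi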
Thursday, November 25, 2010
g15macro on Ubuntu 10.04.1
I have this fancy G15 keyboard from Logitech, which (theoretically) should allow me to record macros bound to the extra G keys. What's supposed to happen is that you install the g15macro package (and some dependencies) and set g15macro as a Startup Application. Then you can hit the MR (macro record?) key, type a macro, hit a G key and voila! your macro is bound to a G key. For an LDAP dude like me, this could save me typing 'ou=People,dc=example,dc=com,dc=au' a few times a day. Sounds like it's full of win!
Of course, things are rarely that easy. I simply never could get it to work. Once I tried running it from the terminal, I could see that after its first run (once it had written out ~/.g15macro/g15macro.conf) it would segfault as soon as it was run. Bugger!
Soon enough, googling around found me plenty of links showing that the answer was to modify g15macro.c, comment out a line that was causing the crash, and hey presto!
Being a moderately experienced Linux user, I tried to install the g15macro-dev package... nope, not found in the Ubuntu repos. OK: download the source, uninstall the packaged g15macro, compile and install the new version. Along the way I hit a few dependencies for the compilation:
sudo apt-get remove g15macro
sudo apt-get install libg15daemon-client-dev libg15-dev libg15render-dev
sudo apt-get install libfreetype6-dev libxtst-dev
cd Downloads/g15macro-1.0.3/
./configure
make check
sudo make install
Easy as pi, though compiling software from tarballs rather than using apt-get makes me feel like I've travelled back in time.
Wednesday, November 17, 2010
Bash scripting, test and OR
Rsync exits with 0 if the transfer was successful, or a non-zero value if there was a "problem". I put "problem" in scare quotes because one condition that can (and does) occur regularly on our mail server is when mail files get deleted while the backup is running - they "vanish". This isn't really an error, but rsync exits with error code 24... and if your backup script does this:
rsync --lots-of-options $SRC $DEST
EXIT=$?
if [ $EXIT -eq 0 ]
then
    echo "Yay, we are all good: [$EXIT]"
else
    echo "Oh noes, bad things happened: [$EXIT]"
fi
...you end up with spurious error reports giving you high blood pressure.
So the obvious solution is to make the if condition for error code 0 or 24, and only spit out "Oh Noes" if it was something different. It took me a while to get the syntax right:
if [ \( $EXIT -eq 0 -o $EXIT -eq 24 \) ]
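Putting it together, the wrapper ends up looking something like this (a sketch - --lots-of-options, $SRC and $DEST stand in for the real backup job):
rsync --lots-of-options "$SRC" "$DEST"
EXIT=$?
if [ \( $EXIT -eq 0 -o $EXIT -eq 24 \) ]
then
    echo "Yay, we are all good: [$EXIT]"
else
    echo "Oh noes, bad things happened: [$EXIT]"
fi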
Wednesday, October 27, 2010
Cacti and rrdtool
After a few hours' worth of wrangling, I think I've tamed rrdtool into producing a sane graph for devices that report temperatures in degrees C multiplied by 10:
/usr/bin/rrdtool graph test.png --imgformat=PNG --start=-86400 --end=-300 \
  --title="rm-mon-1 - Rack A18" --base=1000 --height=120 --width=500 \
  --alt-autoscale-max --lower-limit=0 --vertical-label="degrees C x 10" \
  --font TITLE:12: --font AXIS:8: --font LEGEND:10: --font UNIT:8: \
  DEF:a=rra/rm-mon-1_snmp_oid_138.rrd:snmp_oid:AVERAGE \
  DEF:b="rra/rm-mon-1_snmp_oid_138.rrd":snmp_oid:MAX \
  CDEF:cdefa=a,0.1,'*' \
  CDEF:cdefb=b,10,/ \
  LINE:cdefa#F50000FF:"degrees C" \
  GPRINT:cdefa:LAST:"Current\:%8.2lf %s" \
  GPRINT:cdefa:AVERAGE:"Average\:%8.2lf %s" \
  GPRINT:cdefb:MAX:"Maximum\:%8.2lf %s\n"
The magic is in the CDEF statements, which declare a variable (for example, cdefa) and then assign to it the value of a with an RPN modifier - here 0.1,* (multiply by 0.1) for cdefa and 10,/ (divide by 10) for cdefb, both of which turn tenths of a degree into degrees.
The other magic is to then remember to USE the newly-assigned cdefa rather than straight a as the values used by the LINE and GPRINT statements (it took me a while to realise that I was happily assigning the correct values to cdefa and cdefb and then never using them).
I'm yet to figure out how to wrangle this data into cacti - so far I'm just fooling around in bash. I'm sure I'll figure it out... later.
For bonus points:
- assign more meaningful variable names than a, b, cdefa and cdefb - these are the defaults I got from cacti, but they should really be r18_avg_temp_by_ten, r18_max_temp_by_ten, r18_avg_temp, r18_max_temp
- plot all related rack temps on the same single graph - they're drawn from multiple rrd files, but that should be easy enough (see the sketch below)
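A rough sketch of what that bonus-points version might look like - the second rrd filename and the "Rack A19" label are made up for illustration, so the real paths and indexes would need checking:
/usr/bin/rrdtool graph racks.png --imgformat=PNG --start=-86400 --end=-300 \
  --title="Rack temperatures" --height=120 --width=500 --lower-limit=0 \
  --vertical-label="degrees C" \
  DEF:r18_avg_temp_by_ten=rra/rm-mon-1_snmp_oid_138.rrd:snmp_oid:AVERAGE \
  DEF:r19_avg_temp_by_ten=rra/rm-mon-2_snmp_oid_139.rrd:snmp_oid:AVERAGE \
  CDEF:r18_avg_temp=r18_avg_temp_by_ten,10,/ \
  CDEF:r19_avg_temp=r19_avg_temp_by_ten,10,/ \
  LINE:r18_avg_temp#F50000FF:"Rack A18" \
  LINE:r19_avg_temp#0000F5FF:"Rack A19"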
Thursday, October 21, 2010
ReadyNAS and Cacti: grumble
Since we've been having a few temperature issues lately, I thought it might be a good time to start using our Cacti to graph temperatures, so we can see the trends (as well as the alerts which Nagios sends us). We have a SafetyNet5 which for some reason I cannot get to produce graphs... still puzzling over that one.
But we have a ReadyNAS at most of our sites, so why not get the temperature data and graph that? Good idea, right? Yep, in theory. If I ever get the chance, I'd like to ask the authors of the ReadyNAS MIB why they thought that returning temperature data as a String containing both Celsius and Fahrenheit values was A Good Idea. For some funny reason, Cacti isn't that keen on numbers that look like this: "32.0C/89.6F"
Oh well, nice try. There is a neato solution for this on the cacti forums, but:
- it requires a more recent Cacti than we have
- it requires a more recent ReadyNAS firmware than we have
Monday, October 11, 2010
Nagios monitoring ReadyNAS via SNMP
First up: the ReadyNAS does have an SNMP implementation, but it's off by default, so go turn it on: Front Panel -> System -> Alerts, select the SNMP tab.
Test it works:
$ snmpwalk -v1 -cpublic my-nas-box-ip
SNMPv2-MIB::sysDescr.0 = STRING: Linux my-nas-box 2.6.17.8ReadyNAS #1 Tue Jun 9 13:59:28 PDT 2009 padre
[buckets of usual SNMP output omitted]
The interesting stuff is found here:
$ snmpwalk -v1 -cpublic my-nas-box-ip enterprises.4526
SNMPv2-SMI::enterprises.4526.18.1.0 = STRING: "4.01c1-p6"
[smaller buckets of output omitted]
$ snmpwalk -v1 -cpublic my-nas-box-ip enterprises.4526.18.7
SNMPv2-SMI::enterprises.4526.18.7.1.1.1 = INTEGER: 1
SNMPv2-SMI::enterprises.4526.18.7.1.2.1 = STRING: "Volume C"
SNMPv2-SMI::enterprises.4526.18.7.1.3.1 = STRING: " RAID Level X"
SNMPv2-SMI::enterprises.4526.18.7.1.4.1 = STRING: "ok"
SNMPv2-SMI::enterprises.4526.18.7.1.5.1 = INTEGER: 4262912
SNMPv2-SMI::enterprises.4526.18.7.1.6.1 = INTEGER: 3679876
These items are the (RAID) volume table:
1 is the first RAID volume (and indeed my only one)
Volume C is the name of the RAID volume - it's the default ReadyNAS one
RAID Level X means we're using the ReadyNAS default X-RAID
ok is the status of the volume - it's OK. Phew!
4262912 is the size in megabytes of the volume (I can't really make that tally against the actual size, I'm still puzzling over that one)
3679876 is the free space of the volume in megabytes
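As a quick sanity check on those last two numbers, you can pull them and work out the percentage used yourself - a rough sketch (-Oqv just makes snmpget print the bare value):
SIZE=$(snmpget -Oqv -v1 -cpublic my-nas-box-ip enterprises.4526.18.7.1.5.1)
FREE=$(snmpget -Oqv -v1 -cpublic my-nas-box-ip enterprises.4526.18.7.1.6.1)
echo "$(( (SIZE - FREE) * 100 / SIZE ))% used"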
You can get the whole MIB here. There are plenty of other interesting things you can monitor, such as temp and fan speeds, but since we've set up email alerts if there are problems with those, I'm happy to leave them out of Nagios.
Now to set up Nagios:
Grab check_readynas_hd.pl from these guys. It only monitors the RAID volume, and only the first one - that's all I wanted to monitor, so it's perfect. The code is nice and simple, so it'd be easy enough to expand it to cater for multiple volumes, or to monitor temperatures and physical disks, if you were that way inclined. I found that the script assumed snmpwalk would be at /usr/bin/snmpwalk - not so on FreeBSD, but it wasn't too hard to change it to /usr/local/bin/snmpwalk
You also need the ReadyNAS MIB, so download that.
Running the script was easy:
$ /usr/local/libexec/nagios/check_readynas -H my-nas-box-ip -m /usr/local/libexec/nagios/READYNAS-MIB.txt
Volume C(RAID Level X): 4262912/3679876bytes (13% in use) STATUS: "ok"
Yay, looks good. Now add these lines to commands.cfg:
# 'check_readynas' command definition
define command{
command_name check_readynas_disk
command_line $USER1$/check_readynas -H $HOSTADDRESS$ -m /usr/local/libexec/nagios/READYNAS-MIB.txt
}
Then add these lines to the server's cfg file for nagios:
define service {
use local-service
host_name my-nas-box
service_description readyNAS RAID
check_command check_readynas_disk
}
Re-start Nagios and watch things start to come good (where "monitored" equals "good").
Sunday, October 10, 2010
Nagios monitoring Dell PE 2900 via SNMP
I decided I would like to monitor our new file server - you know, so if the RAID became degraded, I'd know... rather than lose two disks from a set like we did recently and um, lose a bit of data. Yeah, oops.
So... how hard can it be? Answer: quite hard for our Windows 2000 server. More on that later.
For Windows 2003, it wasn't too bad, but there were a few hoops to jump through. I'm documenting those hoops here for future reference:
Install OpenManage Server Administrator Managed Node (v6.3) ... ahah, but not so fast - it will probably force you to install new RAID firmware and drivers, so do that, of course, reboot... then here's the trick that got me first time around: Storage Management is deselected for installation by default, so you MUST choose a custom installation and for the love of god, select Storage Management for installation! Why would Dell do this??? Especially after making a song-and-dance that forced me to upgrade my RAID firmware... anyway...
Then to monitor via SNMP you need Windows SNMP installed (Start -> Settings -> Control Panel -> Add/Remove Programs, select "Windows Components", then "Management and Monitoring Tools", click the "Details..." button, scroll down to "Simple Network Management Protocol" and make sure that's ticked). By default SNMP only allows polling from localhost (this is either good security, or absolutely stupid, depending on your point of view and level of caffeination). To allow SNMP polling from other hosts, go to the Services control panel applet, find "SNMP Service", right-click, select "Properties", click the "Security" tab and either allow SNMP from all hosts, or just the hosts you choose.
Test that SNMP is working:
$ snmpget -v 1 -c public hostname .1.3.6.1.4.1.674.10893.1.20.140.1.1.2.1
SNMPv2-SMI::enterprises.674.10893.1.20.140.1.1.2.1 = STRING: "System"
(this is the name of the "disk label" for virtual disk 1). You can find a list of useful info on OpenManage SNMP here.
Great. From here I could write some simple SNMP checks for Nagios, and so long as the virtualDiskRollUpStatus (1.3.6.1.4.1.674.10893.1.20.140.1.1.19.x) comes back as 3 then we can assume we're all happy. But I thought maybe some helpful soul out there might have already written something more sophisticated for monitoring OpenManage, and they surely have - I settled on check_openmanage as a nice one.
So on the nagios server, I did this:
# cd /usr/local/libexec/nagios/
# wget http://folk.uio.no/trondham/software/check_openmanage-3.6.0/check_openmanage
# chmod +x check_openmanage
# ./check_openmanage -H my-server
OK - System: 'PowerEdge 2900 III', SN: 'XXXXXX1S', 2 GB ram (2 dimms), 2 logical drives, 4 physical drives
# vi /usr/local/etc/nagios/commands.cfg
# add these lines:
# 'check_openmanage' command definition
define command{
command_name check_openmanage
command_line $USER1$/check_openmanage -H $HOSTADDRESS$
}
Then edit the server's .cfg file to call the plugin:
define service{
use local-service ; Name of service
host_name my-server
service_description OpenManage Status
check_command check_openmanage
}
Then re-start nagios and wait till it polls, and see nice green output. Yay!
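For reference, the "simple SNMP check" approach mentioned above could be done with the stock check_snmp plugin - just a sketch, and the trailing .1 index for the first virtual disk is an assumption:
$ /usr/local/libexec/nagios/check_snmp -H my-server -C public -o .1.3.6.1.4.1.674.10893.1.20.140.1.1.19.1 -r 3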
Show where Windows home directories are
In our AD, users' home directories are stored on various file servers. So when it's time to migrate them to a new file server, how do we determine who needs to get moved off the old one? ldapsearch to the rescue:
ldapsearch -x -LLL -E pr=2000/noprompt -h rov-dc -D Administrator@example.com -W -b 'cn=Users,dc=example,dc=com' -s sub homeDirectory | awk '/^homeDirectory:/ {print $2}' | sort
Wednesday, October 6, 2010
Trend 10 fills disks
Trend Micro OfficeScan 10 offers us some compelling features, so we're upgrading from Trend 7 (stop laughing in the back there, we like old software!). We've found 2 significant gotchas:
Disk filler
We install the Trend server software (i.e. the part that farms out the new virus definitions to the clients on that network) in the default location - C:\Program Files\Trend Micro\ - on our local file/print server, and soon enough it fills the entire C: drive, progressively killing off services until file and print services die and the users start calling me. The culprit: C:\Program Files\Trend Micro\OfficeScan\PCCSRV\Apache2\logs\error.log had grown to 3 GB (yep, 3 gigabytes of error logs!). I'd love to tell you what was in that file, but Notepad won't open a file that big, and Wordpad wants to make a copy of it - on C:\temp I guess - before it opens it, making the problem even worse. So I just killed it, shrugged and moved on.
Weird Word Wackiness
OK, this is clearly a corner case, but it happened to three machines on one network. With Trend 10 on the clients (two of them W2K, one XP), they try to open a Word document from a Samba file share, and... can't do it. With OpenOffice instead of Word, it works. With the documents in question copied to a Windows 2003 file server, it works. With Trend reverted to V7, it works. So: Trend 10 + MS Word 2003 + Samba file server = bizarre errors.
Wednesday, September 29, 2010
XMarks replacement
Sad news: XMarks is shutting down soon, which is a spew, since I found it really handy to sync my bookmarks across several computers and my iPhone.
Fortunately, Firefox Sync will sync my bookmarks (and history) across several computers all using Firefox. And the neato Firefox Home will enable me to access the bookmarks on my iPhone. So I'm happy-ish again.
Still a pity that XMarks is going away, though.
Monday, September 27, 2010
Set DRAC network config remotely
So I had a DRAC at a remote site that refused to play nicey-nice - I was pretty sure the default gateway was set wrong so it was unable to get out to anything beyond its LAN. I was able to SSH to a Unix host on the same network, and from there I could ssh to the DRAC in question, and do the following magical incantation:
racadm setniccfg -s 192.168.0.120 255.255.255.0 192.168.0.3
Those three sets of numbers are IP address, netmask and default gateway - setting the last one correctly... wait 10 seconds or so, and hey presto, DRAC plays nicey-nice with the network again.
A neat trick for when you can't get to the web UI.
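For checking what the DRAC currently thinks its settings are, before or after you change them, the read-only counterpart is:
racadm getniccfg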
Wednesday, September 8, 2010
Getting the track listing for a K3B project
This was harder than I expected. However...
Get your .k3b file, which is really a zip file:
$ unzip -l can-u-pick-em.k3b
Archive: can-u-pick-em.k3b
Length Date Time Name
--------- ---------- ----- ----
17 2106-02-07 17:28 mimetype
10623 2106-02-07 17:28 maindata.xml
--------- -------
10640 2 files
and extract the only useful part:
unzip -p can-u-pick-em.k3b maindata.xml > tmp.xml
That's an XML file that contains the project data. Then we feed it through an XSLT processor, using this stylesheet:
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet
version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns="http://www.w3.org/1999/xhtml">
<xsl:output method="text" encoding="UTF-8"/>
<xsl:template match="/k3b_audio_project">
<xsl:apply-templates select="contents">
</xsl:apply-templates>
</xsl:template>
<xsl:template match="contents">
<xsl:apply-templates select="track">
</xsl:apply-templates>
</xsl:template>
<xsl:template match="track">
<xsl:value-of select="cd-text/artist"/>
<xsl:text> - </xsl:text>
<xsl:value-of select="cd-text/title"/>
<xsl:text>
</xsl:text>
</xsl:template>
</xsl:stylesheet>
like this:
$ xsltproc stylesheet.xslt tmp.xml
Eleni Mandell - Pauline
Neko Case - Star Witness
etc
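And if you don't want the intermediate tmp.xml lying around, the whole thing should pipe in one go (xsltproc reads the XML from stdin when you give it - as the filename):
$ unzip -p can-u-pick-em.k3b maindata.xml | xsltproc stylesheet.xslt -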
EDIT!
Another option, which I tried initially, was using dcop to interrogate K3B while it has the project open - there are tons of suggestions out there to do this, but I could never get it to work. It turns out that dcop doesn't work in KDE4, and has been replaced with DBUS. Ah, so here are some examples using DBUS to talk to amarok - so maybe this will work... though this suggests otherwise.
Thursday, September 2, 2010
WSUS, GPO and OU, oh my
I've been wrestling with WSUS - for testing, I only want to apply auto updates to a couple of test victims... err, I mean systems. So I thought I'd create an OU for WSUS, and a sub-OU called test, then create in that a security group, add a couple of test computers to that group. Then apply a GPO to the test OU and hey presto, it would all work. Not so! But along the way I discovered some handy tools to find out why not:
gpupdate /force - force the group policy to update from the DC right now
gpresult - show the set of policies that apply to this computer (and user)
I finally ended up moving the computer's account to a new OU (where the GPO is applied) and it all came good. Annoying, but do-able. Now, to get it detected by the WSUS server:
wuauclt /ResetAuthorization /DetectNow - forces the Windows Update agent (wuauclt.exe) to trot off to the update server right away. Of course, it doesn't then show up until you manually refresh the view on the WSUS admin console - took me a while to realise that.
Thursday, August 26, 2010
Upgrading KnowledgeTree - just kill me now
I hope the title doesn't give it away too much, but I'm not having the greatest of luck with upgrading our KnowledgeTree 3.5.2b to 3.5.3 (to evade this bug).
Even when I took the drastic step of reading the documentation, I haven't had much luck.
To get a 3.5.2b test server up and running, I did the following:
- Set up a VM and install Ubuntu server 8.04 on it
- download ktdms-oss-3.5.2b-linux-installer.bin from here and install it
- run ktree and make sure I can log in as default admin/admin
- stop ktree, and untar our existing backup over it (contains a DB backup, documents, config.ini and plugins)
- start just mysql (dmsctl.sh start mysql)
- cd /opt/ktdms/mysql/bin
- ./mysqladmin --socket=../tmp/mysql.sock -udmsadmin -p drop dms
- ./mysqladmin --socket=../tmp/mysql.sock -u dmsadmin -p create dms
- ./mysql -u dmsadmin -p dms < ../backup/backup.mysqldump
- chown nobody:root /opt/ktdms/Documents/
- /opt/ktdms/dmsctl.sh restart
Now, to upgrade to 3.5.3:
- Run ./ktdms-oss-3.5.3-linux-upgrade-installer.bin
- Once that's complete, browse to http://server-name/setup/upgrade.php - this wants us to log in as an administrative user before it will complete the DB upgrade - which makes sense... BUT none of the admin logins work. Every single one gets the "Could not authenticate administrative user" error message. LDAP user accounts, the builtin admin account... even when I did an echo -n 'password' | md5sum and poked that into the DB manually - nada! Soooo.....
- Edit /opt/ktdms/knowledgeTree/setup/upgrade.php and comment out lines 298 - 303:
{
session_unset();
loginFailed(_kt('Could not authenticate administrative user'));
return;
}
And hey presto, I can upgrade it! Then after the DB upgrade has finished, it goes to re-scan our plugins, POW! The wheels fall off once again:
Fatal error: Class 'KTFolderAction' not found in /opt/ktdms/knowledgeTree/plugins/WemagTreeBrowsePlugin/WemagTreeBrowsePlugin.php on line 34
Well colour me impressed. OK then...
rm -rf /opt/ktdms/knowledgeTree/plugins/WemagTreeBrowsePlugin/
And... now...
Fatal error: Call to undefined method PEAR_Error::getAuthenticator() in /opt/ktdms/knowledgeTree/lib/authentication/authenticationutil.inc.php on line 67
Arg. OK, maybe a restart:
/opt/ktdms/dmsctl.sh restart
Nope, after doing the setup/upgrade.php again, it started complaining about another Wemag plugin, so...
rm -rf /opt/ktdms/knowledgeTree/plugins/WemagSidebarManagement/
Do the dance again... and this time I can get to the login screen. The documents are there, so that's nice. Of course, the Wemag tree browse plugin is gone, which is okay, since I had to remove it.
But before we get too excited: when I log on as me, I initially get my usual "Philip Yarra" account... but once I try to manage anything I get the built-in Administrator account... and when I go to "DMS Administration" I get "Permission denied" and "If you feel that this is incorrect, please report both the action and your username to a system administrator". I do feel that this is incorrect - hell yes, I'll call the sysadmin and... oh yeah, I am the System Administrator.
If I log in as the built in admin account, same deal - cannot administer ktree.
Overall, I'd call this a failure of an upgrade.
Monday, August 2, 2010
Solaris mounting Linux NFS shares: nfs mount: mount: /mount_point: Not owner
In a supreme example of why programmers can sometimes write error messages that make perfect sense to them, but are absolute gibberish to everyone else and don't really help isolate the problem, I present today's bafflement:
bash-3.00# mount -F nfs 192.168.2.248:/c/prc /Backup_PRC/
nfs mount: mount: /Backup_PRC: Not owner
bash-3.00# ls -ld /Backup_PRC/
drwxr-xr-x 2 root root 512 Aug 3 10:10 /Backup_PRC/
Well, yes, I am the owner, thanks for asking. The real cause? Solaris 10 NFS defaults to using NFSv4, and Linux doesn't support it properly (or so the story goes). The solution is real simple: use NFSv3:
bash-3.00# mount -F nfs -o vers=3 192.168.2.248:/c/prc /Backup_PRC/
bash-3.00# mount | grep Back
/Backup_PRC on 192.168.2.248:/c/prc remote/read/write/setuid/devices/vers=3/soft/bg/xattr/dev=4a80006 on Tue Aug 3 10:14:35 2010
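To make that survive a reboot, the matching /etc/vfstab entry would look something like this (a sketch - double-check the fields against your existing entries):
192.168.2.248:/c/prc  -  /Backup_PRC  nfs  -  yes  vers=3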
Thursday, July 29, 2010
Determining citrix client versions
So we need to know what Citrix client versions are connecting to our apps. Turns out to be quite do-able:
In Citrix Access Management Console, I select the "Servers" node in our farm, and change the view to "Users". Then select "Choose Columns" and choose to display "Client Build Number" - here's one I prepared earlier.
You can then use this handy chart to determine which actual version (9, 10, whatever) these numbers represent. What could be simpler?
Friday, July 2, 2010
Dell PE1950 DRAC console redirect error
This afternoon when I tried to use the DRAC ActiveX console redirect doodad to log into the console of one of our PE servers, I got a rather rude error message: "Login failed: channel in use"
The answer suggested at the end of this article worked for me - delete the downloaded ActiveX control. In my case, on a windows 2000 client, the file was C:\WINNT\Downloaded Program Files\Session Viewer - deleted it, deletion failed, waited 20 seconds, tried again, deleted okay, restart IE, and I can use the remote console again.
Thursday, May 27, 2010
VirtualBox, OpenBSD and VT-x extensions
So OpenBSD requires VT-x extensions, which VirtualBox can happily pass through to it, so long as the underlying CPU has such extensions. My desktop doesn't, so on booting the VM, it segfaults everywhere. Someone here had a neato solution:
# get the UUID (or name) of the OpenBSD VM
VBoxManage list vms
# now start that VM with no raw IO
VBoxSDL -norawr0 -vm UUID
For me, problem solved!
Making an OpenBSD Bridge
There's no shortage of documentation showing how to set up a bridge on OpenBSD, but it all assumes you're using a full-blown OpenBSD, where you edit some files in /etc/rc.* and the rc.conf automagically takes care of it for you. When you're using the cut-down environment on a flashdist OpenBSD, none of this magic is done for you - all config is done in a simple /etc/rc file, which is just a shell script. So: what are the *manual* commands to summon a bridge into existence?
The answer is:
# create the bridge interface
ifconfig bridge0 create
# add the network interfaces that are part of the bridge
# these interfaces were already configured to be UP
brconfig bridge0 add vr1
brconfig bridge0 add vr2
# most important: bring the bridge interface up too
# it took me a while to figure this out
ifconfig bridge0 up
Sunday, May 23, 2010
Net5501 serial console settings
It's been a while since I poked around the net5501 serial console. Setting up a new one today, I realised I'd forgotten the settings, which are: 19200 8N1, all flow control off
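For reference, connecting from a Linux box with a USB serial adapter goes something along these lines (the device name is an assumption - yours may differ):
screen /dev/ttyUSB0 19200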
Tuesday, March 30, 2010
NET START and listing the service names
On Windows, I'd long known you could short-cut a trip to Start -> Control Panel -> Admin Tools -> Services (scroll through the list of services, right-click and restart) by simply using cmd.exe and typing 'net start service-name'. What I didn't know was how you could enumerate the service names (a good example: different VNC servers might be called wvnc, vnc-server, and so on).
The answer: type 'net start' by itself to list them. Of course!
It's not exactly logical (I was thinking with net start and net stop, there might be a net list command but no, net start by itself lists them).
Oh, and if the service name has a space in it, you have to put quotes around the service name:
net stop "VNC Server"
net start "VNC Server"
Monday, March 29, 2010
Making Citrix appear in the Applications menu for Windowmaker on Ubuntu 9.10
I've gone back to WindowMaker, and am trying to make it Do The Right Things - specifically, provide the level of functionality I had under KDE. So far it's all going pretty well. One niggle is that the system menus provided by the default installation don't play nice with the wmprefs tool. Wmprefs generates a prop list format, but the output of update-menus is not in that format - presumably it's in the older one.
Anyway, what does it mean? Basically, that you can't manually edit the menus easily. Bit of a pity, but I can live with it. However... where's Citrix gone? It's not in the menus. To fix, add this:
?package(icaclient):needs="X11" section="Applications/Network"\
title="Citrix ICA Manager" command="/usr/lib/ICAClient/wfcmgr" sort="$" \
icon="/usr/lib/ICAClient/icons/manager.png"\
hints="Citrix"
in a file called /usr/share/menu/wfcmgr and then run (as root) update-menus and hey presto, Citrix ICA manager appears in the Applications/Network menu. Magic!
Thursday, March 25, 2010
SafetyNet5 SNMP variables
Of interest to no-one but me. Here are the OIDs to get sensor info from the SS5:
What sort of sensor is it?
$ snmpget -c ro -v1 ss5 1.3.6.1.4.1.14748.1.7.1.1.1.0
SNMPv2-SMI::enterprises.14748.1.7.1.1.1.0 = INTEGER: 4
(smoke, according to the MIB - this is a bit wacked)
What's the sensor's label?
$ snmpget -c ro -v1 ss5 1.3.6.1.4.1.14748.1.7.1.1.2.0
SNMPv2-SMI::enterprises.14748.1.7.1.1.2.0 = STRING: "Temp on desk"
What is the sensor's value?
$ snmpget -c ro -v1 ss5 1.3.6.1.4.1.14748.1.7.1.1.3.0
SNMPv2-SMI::enterprises.14748.1.7.1.1.3.0 = INTEGER: 286
Degrees centigrade multiplied by 10 - so 28.6 degrees
Next sensor:
$ snmpget -c ro -v1 ss5 1.3.6.1.4.1.14748.1.7.1.2.1.0
SNMPv2-SMI::enterprises.14748.1.7.1.2.1.0 = INTEGER: 1
Type is humidity - this agrees with the MIB
Label:
$ snmpget -c ro -v1 ss5 1.3.6.1.4.1.14748.1.7.1.2.2.0
SNMPv2-SMI::enterprises.14748.1.7.1.2.2.0 = STRING: "Humidity on desk"
value:
$ snmpget -c ro -v1 ss5 1.3.6.1.4.1.14748.1.7.1.2.3.0
SNMPv2-SMI::enterprises.14748.1.7.1.2.3.0 = INTEGER: 446
Percentage multiplied by 10 - so 44.6%
And next sensor:
$ snmpget -c ro -v1 ss5 1.3.6.1.4.1.14748.1.7.1.3.1.0
SNMPv2-SMI::enterprises.14748.1.7.1.3.1.0 = INTEGER: 7
Zoned security? Um, no, it's mains fail
$ snmpget -c ro -v1 ss5 1.3.6.1.4.1.14748.1.7.1.3.2.0
SNMPv2-SMI::enterprises.14748.1.7.1.3.2.0 = STRING: "East Wall Power"
$ snmpget -c ro -v1 ss5 1.3.6.1.4.1.14748.1.7.1.3.3.0
SNMPv2-SMI::enterprises.14748.1.7.1.3.3.0 = INTEGER: 0
0 indicates no alarm - that is, power is on
Full MIB here
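And a tiny sketch for doing the divide-by-ten in a script, using the temperature sensor above (-Oqv just makes snmpget print the bare value):
RAW=$(snmpget -Oqv -c ro -v1 ss5 1.3.6.1.4.1.14748.1.7.1.1.3.0)
echo "scale=1; $RAW / 10" | bc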
Monday, March 22, 2010
Making circles in Gimp
Sometimes, I take screenshots, and I like to circle relevant bits of info. The screenshot is easy, using ksnapshot, save it as a PNG file, then open it in Gimp. That's where the fun starts - how do you make a circle (or ellipse) in Gimp?
The answer is: make a new layer, select that layer, on it make an elliptical selection, then fill the ellipse with whatever colour you want the line to be. Then shrink the selection (by, say, 3px) and cut, to remove the centre of the selection. Hey presto! A neat elliptical line.
Tuesday, March 9, 2010
tcpdump example
Between uses, I generally forget how to do this.
tcpdump -i vr0 udp port 500 and host foo.example.com
-i to select the network interface
udp port 500 is kinda self-explanatory
host foo.example.com - just traffic where foo.example.com is dst or src
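Two more flags I usually want and usually forget: -n to skip the DNS lookups, and -w to write the raw capture to a file for Wireshark later (the filename is whatever you like):
tcpdump -n -i vr0 -w ike.pcap udp port 500 and host foo.example.com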
Wednesday, March 3, 2010
Firefox on Kubuntu, and opening files
Yeesh, using Firefox on Kubuntu sucks almost as much as not using Firefox on Kubuntu. It's a nice browser, but I download a file, then in the download list I try to open it, and it says "Oh, what would I open a PDF file with?" and you have to walk through finding acroread, and say "Use this, and please remember for next time". Then you download another file type - maybe a Word doc - and you get to repeat the whole process for that file type, and so on till you could scream! Then Firefox would probably ask "And what should I use to open a .scream file?"
It's not Firefox's fault, really - there seems to be no uniform way to do this across desktop environments - witness the havoc that ensues when otherwise intelligent people start discussing this issue.
Anyway, buried waaaay down on that page is a suggestion to use xdg-open - essentially a tool that uses your desktop manager's native preferences to decide how to open a file. It's not a perfect solution, but by golly it works! For a minimal config, just use xdg-open to open the containing folder, then rely on your native file manager to provide app preferences for opening the contained files.
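For example, to pop open the Downloads folder in whatever the desktop environment considers the right file manager:
xdg-open ~/Downloads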
VirtualBox, cloning machines, eth0 gone!
I decided to start using virtual machines for my testing. VirtualBox OSE (VB for short) is included with Kubuntu, and so long as I use the version that comes with 9.10 it will work for an Ubuntu 8.04 guest (the version of VirtualBox included with Kubuntu 8.04 lacks the PAE emulation, which an 8.04 guest requires).
Aaaaaanyway, having set up an Ubuntu 8.04 server "vanilla" machine, I've cloned it (well, used the VB GUI to export the appliance, then imported it as a new one). Everything works well, and VB even remembers to assign a new MAC address to the new machines, except... when the new machine boots, there's no eth0.
Turns out Debian (and its derivatives) use udev rules to map the MAC address of a network card to its device name. So my new machine boots (with a new MAC as supplied by VB) and the udev rules that would then automagically map the MAC address to the eth0 device no longer work, and... no eth0.
There's an easy solution for Debian etch, documented in about four-kazillion places on the internet, for example here. Sadly, the udev rule file that writes the rule (eh?) to map MAC to device is more primitive on Ubuntu 8.04, so that's not going to work.
Now for the good news: there's an even simpler way to achieve this, as documented here: simply delete /etc/udev/rules.d/70-persistent-net.rules and reboot. Problem solved!
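In command form, run inside the cloned guest:
sudo rm /etc/udev/rules.d/70-persistent-net.rules
sudo reboot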
Tuesday, February 23, 2010
Thunderbird, LDAP address books, borked
So we use an LDAP server to provide (among other things) a company-wide address book. And it's really useful. Yay for the good guys! People using Thunderbird point to it, people using webmail point to it, and we all are one big happy family.
Of course, then I got thinking it'd be really neat if people's personal address books could also go into LDAP, you know, so that the same addresses were accessible regardless of them using Thunderbird, webmail, whatever. For me, it'd be super-handy, since I use lots of different computers - I could just add my contacts from wherever I am, and there they are!
Squirrelmail can do this: this is on the list to try soon.
So obviously Thunderbird can too, right? Right? Guys??
Actually, it seems that it cannot, or if it could, you would not be able to edit the entries from Thunderbird's Address Book. Um, well surely I'm not the first person to want to do this... and if you read the bug report for this item, you can read the almost-decade-long history of how OSS projects can just... not quite deliver on the promise, sometimes. I mean, c'mon guys, a DECADE?
The obvious response to this whinge is: if you want it so much, start coding!
Maybe I can live without it :-)
Sunday, February 14, 2010
bash and floating point numbers
In short: bash does not do arithmetic on floating point numbers. If I was doing anything serious with these numbers, I guess it'd be time to re-do this in perl.
But in this case, all I want to do is see if someone is using more than 90% of their quota, so I'm happy to round down to the nearest integer. Witness my hack (where PC is the percentage of quota used with 2 decimal places, and IPC is the integer representation of this after rounding down):
IPC=$(echo "$PC /1" | bc)
if (( IPC > WARN_THRESH ))
then
echo "WARN $USER they are using $PC % of their $QUOTA quota"
fi
Ugly, but it works.
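For what it's worth, plain bash parameter expansion gives the same round-down without calling bc at all:
IPC=${PC%.*}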
Thursday, February 11, 2010
SquirrelMail LDAP address books
Another day, another interesting bug. A user noticed that in the SquirrelMail LDAP address book, when he listed all users, one person was missing. When you search for the person by name, they do appear. Strange...
ldapsearch shows that person in the list, so it was clearly a SquirrelMail issue. In fact, it turned out that only 250 people were listed by "List All" - a curiously precise limitation. So I grepped for 250 in config.php - nothing! Then I grepped for 250 in SquirrelMail source code, and found this:
functions/abook_ldap_server.php: var $maxrows = 250;
If you don't define maxrows for an LDAP server, it gets this default. Easy solution: edit config.php and add this line:
'maxrows' => 400,
And hey presto, there he is when you list all.
I guess this is one way to know when your company is growing.
Tuesday, February 9, 2010
Dell PowerConnect MIBs
Hey, who doesn't like polling Dell switches for useful information? Amiright?
Anyway, here's where the MIBs are:
ftp://ftp.dell.com/network
Useful-looking MIBs are:
PC_3024-3048-5012_v604_MIBs.zip
PC_3324_MIBs.zip
PC_3348_MIBs.zip
PC_6024FMIBs_v2001.zip
PC_6024F_MIBs.zip
PC_6024MIBs_v2001.zip
PC_6024_MIBs.zip
PC_62xxMIBS_v10027.zip
PC_62xxMIBS_v20012.zip
PowerConncect5324_MIBs.zip
PowerConnect34xx_MIBs_A01.zip
PowerConnect54xx_MIBs_A00.zip
PowerConnect_35XX_MIBs_A00.zip
The one I was after was the temperature for a 3448, which is 1.3.6.1.4.1.674.10895.5000.2.89.53.15.1.9.1
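So polling it looks something like this (the switch hostname is a placeholder):
$ snmpget -v1 -c public my-3448-switch 1.3.6.1.4.1.674.10895.5000.2.89.53.15.1.9.1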
Wednesday, February 3, 2010
X2X
X2X is a nifty tool that allows the mouse and keyboard on one computer to control the X display of another (presumably nearby) computer... for example, my desktop (with nice logitech keyboard and my favourite mouse) controlling the X display on my laptop (a nice enough machine, but would anyone with human-sized hands voluntarily use a laptop keyboard and a glidepad?) In effect, I get two displays controlled by one keyboard and mouse - as I mouse off the right side of my desktop display, the mouse pointer pops onto the laptop.
Problem is, I keep forgetting the magical incantation. So for the following setup:
Desktop, on left
Laptop, on right, IP address 192.168.12.205
I do this:
ssh -X 192.168.12.205 x2x -to :0 -east
There are other options out there for controlling multiple displays with the one keyboard and mouse... for example, Synergy can do more than 2 screens, and works on Windows and Mac as well as Unix. It's quite good, and I've used it in the past, but it needs to be installed, and you have to set up the config files, so it takes a bit more doing.
Tuesday, February 2, 2010
Chown and symlinks
Had a persistent problem with an rsync backup job trying to set file attributes on some symlink files, and failing, as it did not own the files. But I'd done a chown -R to recursively set ownership on every file in that directory. Anyway, it turns out that chown operates on the target of the link, not the link itself, unless you use -h:
chown -h username:usergroup some_symlink_file
Or, for bonus points:
find . -type l -exec chown -h foo:bar {} \;