Tuesday, December 29, 2009

Mythbuntu 9.10 and Hauppauge HVR-2200

I'm helping a friend set up his new MythTV Ubuntu 9.10 box with Hauppauge HVR-2200 card, and it's turned out to be not such smooth sailing.

The first challenge is getting the Hauppauge tuner card working with Linux - these are the instructions that worked for us, except that rather than "make menuconfig" you have to run "sudo make menuconfig" to avoid permissions problems.

Then there was the issue of mythtv-backend not starting on boot, fixed by running:

sudo update-rc.d mythtv-backend defaults 50 51

Mapping channels is described here.

Channels 9 and SBS use MPEG, not DVB, so when you add the channels, add them as MPEG, and they (mainly) work. Glenn reports that there are some issues with HD, but if you go to SD first, it then finds the HD.

Monday, December 21, 2009

Bash, and matching dot-files with wildcards

Live and learn! For years, I've used Linux, and never known how to get * to match all files (by "all" I mean including files that start with a dot).

For example:

# du -ks * | sort -n
4 courierimapsubscribed
4 tmp
20 new
148 courierimapuiddb
348 courierimapkeywords
163356 cur
# du -ks .* | sort -n
388 .Trash
456 .Drafts
664 .ldap
58844 .2007
77644 .Sent
97852 .2008
450348 .
450364 ..

It finally irritated me enough to find out:

# shopt -s dotglob
# du -ks * | sort -n
4 courierimapsubscribed
4 tmp
20 new
148 courierimapuiddb
348 courierimapkeywords
388 .Trash
456 .Drafts
664 .ldap
58844 .2007
77644 .Sent
97852 .2008
163356 cur

So: shopt -s dotglob to turn it on, and shopt -u dotglob to turn it off again.
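If you want to try this without risking a real maildir, here's a minimal sketch in a throwaway directory (dotglob is a bash-specific shell option, so run this under bash):

```shell
# Make a scratch directory with one visible and one hidden file
dir=$(mktemp -d)
cd "$dir"
touch visible .hidden

set -- *            # default globbing: dot-files are skipped
echo "default: $#"  # prints "default: 1" (just "visible")

shopt -s dotglob
set -- *            # now dot-files match too (but never . and ..)
echo "dotglob: $#"  # prints "dotglob: 2"

shopt -u dotglob    # back to the default behaviour
```

Note that even with dotglob on, bash never expands * to . or .. - which is exactly why it's safer than the .* glob shown above.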

Thursday, December 17, 2009

Nagios, nsclient and nsclient++

Installed the latest NSClient++ on a Windows 2000 box, added definitions to monitor it in Nagios, re-loaded nagios, and hey presto... oh wait, lots of red bits. Why? Well, Nagios couldn't connect using the check_nt command. It works fine for all the other Windows servers.

Turns out the latest version uses port 12489 by default, which isn't what check_nt (on our version of Nagios: 2.12) is expecting. So I followed a longish process to find out which port Nagios was expecting.

The answer is: 1248

So I edited nsc.ini, set port to 1248, re-started the service and suddenly, we're all happy.
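For reference, the relevant setting lives in the [NSClient] section of nsc.ini (section and option names here are from memory of that era of NSClient++, so double-check against your own file):

```
[NSClient]
; check_nt on older Nagios expects the legacy NSClient port
port=1248
```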

Another little amble down the road of "huh?"

Tuesday, December 8, 2009

Mozilla + google + squid + pubmed == pain

One of my users reported the following issue: he goes to google, types "pubmed" and clicks the first link in the search results, which is in fact for PubMed (http://www.ncbi.nlm.nih.gov/pubmed/). He then gets this error:

ERROR: 404 Not Found
NCBI C++ Exception:
Info: CGI(CCgiRequestException::Unexpected or inconsistent HTTP request) "/export/home/miller/PORTAL/2.7/src/cgi/cgiapp.cpp", line 1056: --- Prefetch is not allowed for CGIs
Error: WEB(CCgiException::eInvalid) "/export/home/miller/PORTAL/2.7/src/internal/portal/web/papp.cpp",
line 82: --- OnExceptionURL is not set

The cause turns out to be the confluence of the following:

1. Firefox already implements a soon-to-be-standard HTML feature called
pre-fetching: a page can provide a series of hints about the next page
the user is likely to click to, and provide some links to resources for
pre-fetching. It's supposed to make the load time shorter. More info

2. Google now provide pre-fetch hints for the top links on the search
results. View the source of your search for pubmed, and you'll see this:

<link rel=prefetch href="http://www.ncbi.nlm.nih.gov/pubmed/">

3. PubMed clearly don't like people pre-fetching their site, and have
taken some fairly heavy-handed tactics to combat it:
see the source here
You can see they're checking for the x-moz: prefetch header
and returning HTTP status 403, with no pragma to prevent a proxy server
from caching that response. Then you click the link, and get the cached
version from the proxy with the error message. This is why shift-reload
works - it's forcing the proxy to go get it again, and since there's no
prefetch header, this time it works.

There are a couple of ways to avoid this; here they are in my order of preference:

1. PubMed find a better way to avoid prefetches on their CGIs (e.g.
either explicitly set pragma to prevent caching by proxies, or use an
HTTP 503)

2. our users get to pubmed via a bookmark

3. you can disable the Firefox pre-fetch mechanism, but that's per-user,
per-computer - it adds a lot of overhead to IT which, frankly, I could live
without
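For what it's worth, option 3 comes down to flipping one Mozilla preference, either through about:config or by dropping a line into each profile's user.js:

```
// user.js - disable link prefetching in Firefox
user_pref("network.prefetch-next", false);
```

Which is exactly the per-user, per-computer overhead mentioned above.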

Sunday, October 11, 2009


This is the command to generate a password line for rootdn in slapd.conf. I consistently forget this fact, so I'm noting it here for future reference.
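The command itself seems to have dropped out of the post; my assumption from context is that it was slappasswd, which prints a hashed password line for pasting into slapd.conf:

```
# Prompts for the password, then prints a salted SHA hash
slappasswd
# output looks like: {SSHA}...

# which goes into slapd.conf as:
rootpw {SSHA}...
```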

That is all.

Wednesday, August 26, 2009

CUPS, old HP printer and multiple copies

Now and then, I like to print multiple copies of something. Like, 90 copies at a time. This should be a snap, but some combination of my old HP LaserJet 4000 and CUPS prevents it working. Doesn't matter how nicely I ask, I always get 1 copy, no more, no less. Finally found the answer:

lp -n 90 -o Collate=True -d HPLaserJet my_file.ps

Just one more thing to keep me banging my head against the desk!

Tuesday, July 21, 2009


Walk the SNMP tree for Bridge STP info:

snmpwalk -c community -v 1 ip-address BRIDGE-MIB::dot1dStp

Get the bridge's current root:

snmpget -c community -v 1 ip-address BRIDGE-MIB::dot1dStpDesignatedRoot.0

Get this bridge's ID:

snmpget -c community -v 1 ip-address BRIDGE-MIB::dot1dBaseBridgeAddress.0

Saturday, March 28, 2009

Printing an A5 booklet

Yup, a break from LDAP :-) Instead I got to help with the kinder newsletter. They wanted it done as an A5 booklet (you know, printed 2 pages per sheet of A4, then folded over). After wrestling with OpenOffice (I tried setting an A4 page in landscape and dividing it into 2 - it was verrry hard, and ended up not working too well anyway), I discovered someone had already solved the problem far more elegantly than I could.

This person has the best summary:

$ psbook print.ps out.ps
$ psnup -2 out.ps > out2up.ps

I had a lot of trouble convincing the stupid printer software on Linux to print multiple copies on my HP LaserJet (that's an annoyance to track down another day), and I also have no duplex unit, so I ended up printing pages 1 and 3, flipping those over, putting them back in the feed bin, and printing pages 2 and 4 on the other side. I split the pages by opening the postscript and printing just pages 1 & 3 to a new postscript file, then doing the same for pages 2 & 4. Then printed with:

$ lp -n 70 -d HPLaserJet -o Collate=True pages1and3.ps
$ lp -n 70 -d HPLaserJet -o Collate=True pages2and4.ps
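The print-to-file page splitting could probably be avoided entirely with psselect, which comes in the same psutils package as psbook and psnup (a sketch, not something I've run against this exact file):

```
$ psselect -p1,3 out2up.ps pages1and3.ps
$ psselect -p2,4 out2up.ps pages2and4.ps
```

psselect also has odd/even page options worth a look for longer booklets.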

Thursday, March 26, 2009

LDAP, uidNumber and ordering

Agh, another day, another LDAP thing that makes me unhappy.

Since cracking the trick to getting a unique uidNumber (see yesterday's installment), the system has really come along well. I can actually add users now, from the web interface and all. Such modern luxuries!

In fact, I've added quite a lot of users. Now I want to remove my test ones. I know the uidNumber of the last real user I added, so it should be easy to get all users with a higher uidNumber than that, and delete them. Easy, right?


You see, I read the RFC and thought since it says you can do >= to match all integers greater-or-equal, I could actually do that. So I did this:

ldapsearch -L -x '(uidNumber>=10220)'

And the response I got was:
# search result
Additional information: inappropriate matching request

Hmmm... so time to check the schema, in case uidNumber isn't defined as a number... but it is:

# builtin
#attributetype ( NAME 'uidNumber'
# DESC 'An integer uniquely identifying a user in an administrative domain'
# EQUALITY integerMatch

Yes, it is an integer.

Turns out the issue is a lack of ORDERING clause - so I can do an equals-match on a uidNumber, but nothing that involves ordering. Some bright sparks have modified the nis.schema like this:

attributetype ( NAME 'uidNumber'
DESC 'An integer uniquely identifying a user in an administrative domain'
EQUALITY integerMatch
ORDERING integerOrderingMatch

and it's all just worked for them. However, sadly for me, this attribute type is now built in to OpenLDAP, so modifying my nis.schema to add the ORDERING clause and un-commenting those lines defining the attribute gets me a slapd that refuses to start due to a duplicate attribute definition.


The solution appears to be to add another attribute of one's own that is basically a duplicate of uidNumber, but with an ORDERING rule added. Yeesh!
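A sketch of what that duplicate attribute could look like (the OID prefix and attribute name here are placeholders of my own, and the SYNTAX OID is the standard LDAP Integer syntax - untested config, adjust to your own arc):

```
attributetype ( 1.3.6.1.4.1.99999.1.1 NAME 'x-orderedUidNumber'
    DESC 'Copy of uidNumber that also supports ordering matches'
    EQUALITY integerMatch
    ORDERING integerOrderingMatch
    SYNTAX 1.3.6.1.4.1.1466.115.121.1.27 SINGLE-VALUE )
```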

Or if you prefer some unix hackery:

ldapsearch -LLL -x '(objectClass=posixaccount)' | awk -F: '$1 ~ /dn/ {printf("%s ",$2)}; $1 ~ /uidNumber/ {printf("%s\n",$2)}' | sort -nk2 | awk '$2 > 10214 {print $1}' | ldapdelete -x -W -D 'uid=pyarra,ou=People,dc=example,dc=com,dc=au'

There might be a better way than this :-)

Wednesday, March 25, 2009

LDAP, JNDI, posixAccount and Unique UID numbers, oh my!

If you aren't interested in LDAP, or don't know what it is, stop reading now and save yourself the pain. I'm putting this here so I can refer back to it later. And who knows, maybe it will help someone else.

I'm writing a user management system in LDAP. It is pretty simple: allows non-technical users to add new accounts for the mail server, edit some details of existing accounts, a few other management functions. Simple stuff... well, it ought to be, anyway, but some bits are waaaay harder than I expected.

Here's an example: each new user who is added is also a posixAccount - this means they need a unique uidNumber. For example, bbuilder has uidNumber 664, sfireman has uidNumber 665, and so on. Two users having the same UID would be A Bad Thing. If this was using a database, the problem is easily solved:

CREATE TABLE users (uidNumber integer DEFAULT nextval('nextUidNumber'), ... );

For good measure, enforce uniqueness with a unique index on users(uidNumber).

LDAP makes this kind of thing rather harder.

First, you need a single-valued attribute to hold your next UID number. Amazingly, I cannot find a pre-existing schema for this - surely it's the kind of thing people need to do regularly. So I had to create my own schema. First, I had to get a Private Enterprise Number (PEN) assigned from IANA so my schema would be in its own space (known as an arc in LDAP and SNMP speak).

Then I set to writing a simple schema. The OpenLDAP doco was excellent on this topic. Nevertheless it took me a few goes to get it working to my satisfaction.

The schema I came up with looks like this:

objectIdentifier ROVOID 1.3.6.1.4.1.<PEN> # not actually our PEN, I'm still waiting for it
objectIdentifier ROVSNMP ROVOID:1
objectIdentifier ROVLDAP ROVOID:2
objectIdentifier ROVLDAPATTR ROVLDAP:1
objectIdentifier ROVLDAPOBJECT ROVLDAP:2

attributeType ( ROVLDAPATTR:1 NAME 'x-rov-nextUidNumber'
    DESC 'A counter for storing next UID number for posixAccounts'
    EQUALITY integerMatch
    SYNTAX 1.3.6.1.4.1.1466.115.121.1.27 SINGLE-VALUE )

objectClass ( ROVLDAPOBJECT:1 NAME 'x-rov-UidCounter'
    DESC 'objectClass containing the next UID counter'
    SUP top STRUCTURAL
    MUST ( x-rov-nextUidNumber $ name ) )

This was created as file /etc/ldap/schema/local.schema, added to the list of imported schemas in /etc/ldap/slapd.conf, and then slapd was re-started.

Then create an instance of this objectClass (I've stashed ours in name=mailAccountUid,ou=IT,ou=mgmt - a place reserved for management info for the application). I cheated and used phpLdapAdmin, but the LDIF would look like this:

dn: name=mailAccountUid,ou=IT,ou=mgmt,dc=example,dc=com,dc=au
name: mailAccountUid
objectClass: x-rov-UidCounter
objectClass: top
x-rov-nextUidNumber: 1

You also want to set x-rov-nextUidNumber to the start of the range you're using for your LDAP user accounts (you do not want to start at 1 - typically, low numbers are used by system accounts: check the output of `getent passwd` to see which are already in use). I chose to start at a nice arbitrary 10,000:

dn: name=mailAccountUid,ou=IT,ou=mgmt,dc=example,dc=com,dc=au
changetype: modify
replace: x-rov-nextUidNumber
x-rov-nextUidNumber: 10000

Now when you want the next available UID, you can get the value of x-rov-nextUidNumber, increment by one, then write it back to the LDAP store. Of course, then you have the issue of concurrency - what happens if two users are added at the same time?

Various solutions have been suggested, but the one I have used (and tested - it works!) is to do an atomic modify operation consisting of delete and add for the attribute. It works because if you attempt to delete the attribute with the value you read, and it throws a NoSuchAttributeException, you know someone else messed with it while you were incrementing it and trying to write it back, so you just try again - get, increment, write it back. I've put this in a loop of 5 attempts. I think this is reasonable for a company with about 200 users. I'm working in java for this, but the same atomic modification should be available via other language APIs. In JNDI it is DirContext.modifyAttributes(String, ModificationItem[]), and you use it like this:

// Assumes: import javax.naming.*; import javax.naming.directory.*;
public static int getNextUidNumber(DirContext ctx) throws NamingException {
    String fnName = "getNextUidNumber: ";
    Utils.debug(fnName + "start");
    int retval = 0;
    int numAttempts = 5; // how many times we'll try to atomically get the next UID before giving up

    Attributes matchAttrs = new BasicAttributes(true); // ignore attribute name case
    matchAttrs.put(new BasicAttribute("objectClass", "x-rov-UidCounter"));
    matchAttrs.put(new BasicAttribute("name", "mailAccountUid"));
    String[] returnAttrs = { "x-rov-nextUidNumber" };

    for (int i = 0; i < numAttempts; i++) {
        NamingEnumeration<SearchResult> answer = ctx.search("ou=IT,ou=mgmt,dc=example,dc=com,dc=au", matchAttrs, returnAttrs);
        SearchResult sr = answer.next();
        Attributes attrs = sr.getAttributes();
        String thing = safeStringGet(attrs, "x-rov-nextUidNumber");
        Utils.debug(fnName + "got answer [" + thing + "]");

        retval = Integer.parseInt(thing);
        int nextval = retval + 1;
        Utils.debug(fnName + "replacing with [" + nextval + "]");

        // delete-with-value plus add, applied as a single atomic modify
        ModificationItem removeOld = new ModificationItem(DirContext.REMOVE_ATTRIBUTE, new BasicAttribute("x-rov-nextUidNumber", "" + retval));
        ModificationItem addNew = new ModificationItem(DirContext.ADD_ATTRIBUTE, new BasicAttribute("x-rov-nextUidNumber", "" + nextval));
        ModificationItem[] atomicReplace = { removeOld, addNew };
        try {
            ctx.modifyAttributes("name=mailAccountUid,ou=IT,ou=mgmt,dc=example,dc=com,dc=au", atomicReplace);
            return retval;
        } catch (NoSuchAttributeException nsaex) {
            // someone else changed the counter between our read and write - try again
            Utils.info(fnName + "exception on atomic increment, trying again: " + nsaex);
        }
    }
    throw new NamingException("Could not get next UID after " + numAttempts + " attempts");
}

For the sake of completeness, here is the definition for safeStringGet - a convenience function to avoid having to null-check the attribute before attempting to get its value.

private static String safeStringGet(Attributes attr, String attrName) throws NamingException {
    Attribute at = attr.get(attrName);
    if (at != null) {
        return (String) at.get();
    }
    return null;
}

There you have it - a couple of days of work (coding, reading, coding some more, asking questions on LDAP lists, more reading, more coding) to achieve something that I think should have been easy.

My thanks go to Vincent Ryan for explaining how to use modifyAttributes() to achieve an atomic operation. He also warns that the atomicity only holds for single-master LDAP set-up - with multi-masters, this atomicity is not guaranteed. Since I'm not game to go within a million miles of a multi-master setup, and have no need to anyway, I'm noting this for the sake of completeness and moving on.

Thanks also to Francis Swasey, who sent me Perl code to do the same in perl-ldap:

my $umsg = $ld->modify($dn,
delete => { 'uidNumber' => $currentvalue },
add => { 'uidNumber' => $nextvalue});

(where $currentvalue is the value I just retrieved from my OpenLDAP server and $nextvalue is $currentvalue incremented by 1).