Thursday, December 29, 2005

Domains
I've always thought that domain squatting was an unethical way to make money, but had only heard stories of it before now.

Anne suggested that it might be nice to pick up the gearon.com domain, if it was available. After all, I normally take up a lot of the first result page at Google when you type in my surname. (Just looked, and today I don't! I really need to blog more often.) OK, so I'm not a commercial entity, but everyone recognizes .com, while people often look at me funny when I say .org or .net. The .com thing has brand recognition.

So I had a look, and discovered that the domain is already registered, but it's up for sale through the bidding process at Afternic.com. The minimum bid was way more than I'd have liked to spend, but it's my name, so why not?

A week later I discovered that the bid had been rejected. I asked Afternic what a decent price was supposed to be (according to their market analysis). Their response was that the current market value is $200. Still far too much, but it gave me the confidence to ask the current domain holder how much they wanted.

The answer? $2950. Quoting from the email:

Price is very low for a family name.

Huh? Whose family? The Rockefellers?

I didn't care about the domain all that much (it should probably go to a more commercial interest, like something run by Michael Gearon or Tierney Gearon), but registering a name and then charging to give it back to an owner of that name is a practice I find rather offensive. I suppose I should be grateful she was asking for $3,000 and not $30,000.

I resolved it by registering gearon.org for $8.20.

UIMA
The other day I followed a link over to IBM's DeveloperWorks, and found that they have an RSS feed for their tutorials. I was pleased to find a simple Python tutorial that I'm using to finally introduce myself to that language. But more importantly, I found a tutorial for generating a UIMA annotator.

The UIMA docs are very verbose, and a tutorial like this has been great for cutting through the chaff. It's full of stuff I don't need (mostly because I've already learnt it from the official UIMA docs), but it's still been a real help.

My biggest problem at the moment is that UIMA wants all my annotations as character offsets. Unfortunately the library I'm using provides its information as word offsets. That's trivial to convert when words are separated by whitespace, but punctuation leads to all sorts of unexpected things, particularly since the grammar parser treats some punctuation as individual words, while other punctuation gets merged into existing words.

I'm starting to wonder if I need to re-implement the parser so I know what the character offsets of each word will be. Either that, or I'll be doing lots of inefficient string searching. I don't find either prospect enticing. Maybe if I sleep on it I'll come up with something else.
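
Just to make the "inefficient string searching" option concrete, here is roughly the sort of conversion I have in mind. The Span class and the token list are made-up stand-ins for whatever the parser really returns, and it assumes every token appears verbatim in the original text, which the merged-punctuation cases may well break:
  import java.util.ArrayList;
  import java.util.List;

  public class OffsetMapper {

    /** A begin/end pair of character offsets (hypothetical, for illustration). */
    public static class Span {
      public final int begin, end;
      public Span(int begin, int end) { this.begin = begin; this.end = end; }
    }

    /** Map each token (its word offset is its index in the list) to character offsets. */
    public static List<Span> toCharOffsets(String text, List<String> tokens) {
      List<Span> spans = new ArrayList<Span>();
      int cursor = 0;
      for (String token : tokens) {
        int begin = text.indexOf(token, cursor);  // punctuation tokens are searched for the same way
        if (begin < 0) {
          throw new IllegalStateException("Token not found in text: " + token);
        }
        spans.add(new Span(begin, begin + token.length()));
        cursor = begin + token.length();          // never search behind the previous match
      }
      return spans;
    }
  }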

Wednesday, December 28, 2005

Gödel's Theorem
I was just looking at the fascinating exhibit of equations by Justin Mullins. I'm not sure if I see the exhibit as art, since the visual appearance evokes little in people who do not understand the equations (with the possible exception of the Four Color Theorem), but they are certainly beautiful.

I particularly loved the end of the narrative for Gödel's theorem:

Others have wondered what Gödel’s theorem means for our understanding of the human mind. If our brains are machines that work in a consistent way, then Gödel’s theorem applies. Does that mean that it is possible to think of ideas that are true but be unable to prove them? Nobody knows.

Note the sentence that I highlighted. If it is true, then that sentence is an unprovable idea. I love it.

Saturday, December 17, 2005

Qubytes
Lots of places are commenting on the new quantum memory chips in silicon.

I'm surprised at this. I expected that embedding quantum devices in silicon would be done with quantum dots rather than ion traps. It's probably better that it was done with ion traps, as there seems to have been more research into quantum processes using that technology. After all, what good is a quantum state if you can't apply transformations to it without collapsing the state?

All the same, a chip like this is just a first step in a long line of problems to be solved. There is no discussion about setting up quantum states, nor reading them back. There is no discussion about the ability to entangle the qubits on the chip, and how far that will scale. Transformations will eventually have to be built into the chip. But if research has taught me anything, it's that the big problems are usually solved by lots of people chipping away at the little problems. By the time the final solution comes around, it doesn't seem like a big deal any more.

Tracker
The little tracker icon I have on this page is a link to a service that tells me how many hits the blog is getting (but not the RSS feed). I haven't bothered to look at the stats in a long time. After all, I'm rarely writing, so why would anybody (beyond my friends) bother to read?

Apparently I was wrong in that assessment. I'm averaging over 20 hits a day, with peaks over 40, and I'm writing less than once a week. My infrequency is due to lack of time. Given how many people are reading so little here, I'm wondering if anyone else suffers the same problem. :-)

Thursday, December 15, 2005

Blogging
I noticed a new post on the Google Blog describing a new tool for Firefox. When installed, a small message appears on any page you visit, showing a list of blogs which refer to that page. Sounds cute.

I normally browse with Safari (gotta love those native widgets), but I keep Firefox installed (after all, some pages have extra features when viewed with Firefox). So here was a chance to upgrade my version of Firefox (I hadn't picked up 1.5 yet) and install Google's new tool.

So where should I go first to check out the comments? Well obviously my usual home page of Google comes up, and there are ample comments. How about the page talking about the new tool? Lots of comments there too. Oh, I know! How about this blog? :-)

Unfortunately the list of comments was a little disappointing, being mostly my friends. That will teach me to not blog regularly. However, I did find one blog on semantic web development that was really interesting. I was just disappointed that he didn't have a lot of incoming links (though there was one worth checking out).

All in all, it's a tool that I like. In fact, I wouldn't mind a similar tool that did a link: search on Google, rather than just in the blogosphere.

Tuesday, December 13, 2005

Modeling Talk
Last week I was invited along to a talk given by Bob at SAP. I enjoy seeing what Bob's working on when I'm not discussing OWL with him. He's a clever guy, and understands modeling quite well. I just wish I'd written about it sooner, as my memory of it won't be so clear anymore.

Probably the most important thing I got out of his talk was an overview of category theory. Andrae and Simon have both spoken about it, and I've come to understand that it's relevant, but as yet I haven't learnt anything about it. Bob gave the 30 second overview for computer scientists, which I found quite enlightening.

I finally got my copy of Types and Programming Languages (otherwise known as TAPL), and have been looking forward to reading it. But when ordering this book I discovered that Benjamin Pierce has also written a much smaller book called Basic Category Theory for Computer Scientists. I had considered getting this book (at only 117 pages, it looks like a relatively quick read), and Bob's talk has now convinced me. The only problem is that I'll have to put the order off until we move to the States... whenever that happens.

Modeling
Speaking of books, I also picked up a copy of MDA Distilled: Principles of Model-Driven Architecture. Some of the work I did with SAP came out of this book (virtually guaranteed when you work with one of the authors), and it covers the kind of dynamic modeling that I've been talking about investigating with OWL. I haven't been through all of it before now, so I thought it would be worthwhile reading it in detail.

Coincidentally, I was explaining some of my ideas to someone at work today (Indy), and referred to this book to describe some of the background. I had some idea that Herzum Software worked with MDA (which is why I thought they might be interested in this work), but I had never thought of it as a formal association. Indy quickly made it very clear that Herzum Software specifically put themselves out there as an MDA company. That makes perfect sense as it aligns with what I already knew, but being in my own little corner of the world has kept me isolated from the advertising of it. Anyway, it's nice to know that the direction I'm moving in is paralleled by the work of my new employer.

RDFS Entailment
I've also been in an email discussion about entailment on RDFS. It seems that the following statements:

  <camera:min> <rdfs:range> <xsd:float>
  <_node301> <camera:min> '15.0'^^<xsd:float>
will lead to an entailment of:
  <xsd:float> <rdfs:subClassOf> <rdfs:Resource>
  '15.0'^^<xsd:float> <rdf:type> <xsd:float>
  '15.0'^^<xsd:float> <rdf:type> <rdfs:Resource>
It seems that I didn't cover all the possible rules which could lead to a literal in the subject position. It's quite annoying, as these are completely valid entailments, according to RDF semantics. Making special cases to avoid particular results seems like a hack.
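
To show what I mean by a special case, here is a toy version of the range rule (rdfs3) with the filter that keeps literals out of the subject position. It works over plain strings and is nothing like Kowari's actual rule engine:
  import java.util.ArrayList;
  import java.util.List;

  /**
   * A toy version of the rdfs3 (range) rule over triples held as String[3]
   * arrays, with literals written in the quoted form used above. The filter
   * on literal subjects is exactly the kind of special case I mean: the
   * entailment is valid under the RDF semantics, but RDF syntax has nowhere
   * to put a literal subject.
   */
  public class RangeRule {
    static final String RDFS_RANGE = "rdfs:range";
    static final String RDF_TYPE = "rdf:type";

    public static List<String[]> apply(List<String[]> graph) {
      List<String[]> inferred = new ArrayList<String[]>();
      for (String[] range : graph) {
        if (!RDFS_RANGE.equals(range[1])) continue;        // p rdfs:range C
        for (String[] use : graph) {
          if (!use[1].equals(range[0])) continue;          // s p o
          boolean literalSubject = use[2].startsWith("'"); // e.g. '15.0'^^xsd:float
          if (!literalSubject) {                           // the "hack" discussed above
            inferred.add(new String[] { use[2], RDF_TYPE, range[2] });
          }
        }
      }
      return inferred;
    }
  }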

In a similar way, it seems wrong to not allow entailments about blank nodes. I should re-visit the decision there. I think I need to re-read the semantics document to see if I can get further enlightenment. At the least, I know that I can't entail a statement with a blank node as the predicate. Like the problem with literals, the semantics document appears to justify this sort of statement, but the RDF syntax doesn't allow for it. I know this is a particular bugbear for Andrae.

Catch Up
I've been wanting to write for nearly a week now. Every time I've tried to sit down to it, a work task, family needs, or packing has taken priority over this blog. I ended up having to write little notes to myself to remind me of what I wanted to blog about.

JNI and Linux
Having made the Link library work on Mac OSX using JNI, I figured it would be easy to get it working on Linux as well. Unfortunately it didn't work out that way.

To start with, I got an error from the JVM saying that it could not find a symbol called "main" when loading the library. This sounded a little like dlopen loading an incorrectly linked file. I'm guessing that the dlopen procedure found what it thought was an executable, and therefore expected to see a main method. Googling confirmed this, but didn't really help me work out the appropriate flags for linking to fix this.

I had compiled the modules for the library using -fPIC (position independent code). I then used -Wl,-shared to tell gcc to pass -shared through to the linker, in order to link the modules into a shared library. However, it turned out that I really needed to use -shared directly on gcc. I still have to work out what the exact difference is, but that's not a big priority for me at the moment, since I have it working. According to DavidM there is something in the gcc man page about this, so at least I know where to look.

After linking correctly, the test code promptly gave a HotSpot error, due to a SIGSEGV. This meant that there was a problem with the C code. This had me a little confused, as it had run perfectly on OSX. Compiling everything in C and putting it all in a single executable demonstrated that the code worked fine on Linux, so I started suspecting that the problem might be across the JNI interface. This ended up being wrong. :-)

There are not many differences between the two systems, with the exception of the endianness of the CPUs. However, after looking at the problem carefully, I could not see this being the cause.

The initial error included the following stack trace:

C [libc.so.6+0xb1960]
C [libc.so.6+0xb4fcb] regexec+0x5b
C [libc.so.6+0xd0a98] advance+0x48
C [liblink.so+0x19f9d] read_dictionary+0x29
C [liblink.so+0x1d705]
C [liblink.so+0x1d914] dictionary_create+0x19
C [liblink.so+0x286c9] Java_com_link_Dictionary_create+0xc1
The only code I had real control of was in Java_com_link_Dictionary_create, dictionary_create and read_dictionary. I started by looking in Java_com_link_Dictionary_create and printing the arguments, but everything looked fine. So then I went to the other end and looked in read_dictionary.

I was a little curious about how read_dictionary was calling advance, as I hadn't heard of this function before. Then I discovered that the function being called was from the Link library, and has a signature of advance(Dictionary). This didn't really make sense, as my reading of the stack trace above said that advance came from libc and not the Link library (liblink). This should have told me exactly what was happening, but instead I tried to justify what I was seeing. I convinced myself that the function name at the end of each line described the function that had called into that stack frame. In hindsight, it was a silly bit of reasoning. I was probably just tired.

So to track the problem down I started putting printf() statements through the code. The first thing that happened was that the HotSpot errors changed, making the error appear a little later during execution. So that meant I had a stack smash. Obviously, one of the printf() invocations was leaving something on the stack that helped the code in the stack trace above avoid the SIGSEGV. OK, so now I was getting some more info on the problem.

It all came together when I discovered that I was seeing output from just before read_dictionary() called advance(), and from just after it, but not from any of the code inside the advance() function. At that point I realised that the above stack trace didn't need a strange interpretation, and that the advance() that I was calling was coming from libc and not the local library.

Unfortunately, doing a "man advance" on my Linux system showed up nothing. Was I wrong about this method? I decided to go straight to the source, and did a "nm -D /lib/libc.so.6 | grep advance". Sure enough, I found the following:
  000b9220 W advance
So what was this function? Obviously something internal to libc. I could download the source, but that wasn't going to make a difference to the problem or the solution. I just had to avoid calling it.

My first approach was to change the function inside Link to advance_dict(). This worked perfectly, and showed that I'd found the problem. However, when the modules were all linked into a single executable it had all worked correctly, and had picked up the local function rather than the one found in libc. So why didn't the shared library do the same?

I decided that if I gave the compiler a hint that the method was local, then maybe that would be picked up by the linker. So rather than renaming the function to advance_dict(), I changed its signature from:
  int advance(Dictionary dict)
to:
  static int advance(Dictionary dict)
I didn't know that this would work, but it seemed reasonable, and certainly cleaner since it's always a bad idea to presume that your name is unique (as demonstrated already). Fortunately, this solution worked just fine.

DavidM explained to me that static makes a symbol local to a compilation unit (which I knew) and is effectively a separate namespace (which I also knew). He also explained that this "namespace" has the highest priority... which I didn't know, but had suspected. So I learned something new. David and I also learnt that libc on Linux has an undocumented symbol in it called advance. This is worth noting, given how common a name that is. As shown here, it is likely to cause problems for any shared library that might want to use that name.

There's more to write, but it's late, so I'll leave it for the morning.

Sunday, December 04, 2005

Blogging
I'm a little annoyed at myself for lack of blogging recently. This is particularly the case as I see mainstream media commenting on people's online presence more and more. I almost feel like I'm missing out on something. Yes, I know that's a ridiculous concern, but I'm allowed to worry about anything I want to. :-)

Other than the restrictions imposed on me by my recently expanded family, my main problem with blogging lately has been lack of material. I don't mean that I have nothing to say. Instead, I'm limited by what is appropriate to put into a public forum. It was much easier when I worked on Open Source software all the time.

For instance, this last week has had me reviewing software produced by a group of academics at another company. My review is for my employer, so I obviously can't publish it (otherwise, why would he be paying me?). Also, any review will naturally say both good and bad things. The good may be OK, but saying something bad in public is obviously inappropriate. After all, these guys are out to impress customers and make money too.

So I'm left having to write about what I do out of hours. That's all well and good, but having a young family reduces the time for that.

I could always write a few opinion pieces. Australian federal politics has had me feeling frustrated for some time now, and I definitely have things to say on the topic. But that's not what this particular blog is about. I could always start a parallel blog, but then, who would really want to know what I think about Brendan Nelson and higher education in Australia? It would be cathartic for me, but not so much that I think it's really worthwhile.

All the same, I might consider a second blog to contain random musings (like this one). Maybe one evening when I'm not feeling like going to bed, and I have something I feel I want to say. It could be a mix of my daily life, frustrations, and comments on the oft-explored experience of fatherhood. I'm not sure it will be good reading, but I may have fun coming back to it in a few years to see just how naive I really was back in 2005. :-)

Grammar
Meanwhile, I'm back to grammar parsing, using Link. I was a little chuffed to get the JNI all working, particularly when I was able to rewrite some of the test code in Java and have it all run correctly. I still need to test that it runs fine on Linux, but I don't have any real concerns there. Making it run on Windows will be another story.

Ideally, I'll be able to use MinGW as the compiler, as it should help keep the codebase and build process consistent. I just hope I won't have to jump through too many hoops to generate a DLL file.

I could always ask someone at work if we have an MS commercial compiler, but we may not. I have my own, but I'm not licensed to use it for work. It amazes me that people are concerned about the restrictions of Open Source licensing, when commercial licensing can be far worse.

Weather
I'm a little obsessed with the weather at the moment. I enjoy our sub-tropical climate here, and it's going to be a rude shock to land in Chicago in the middle of Winter. As a result, I'm enjoying every minute here that I can. I'm also comparing the weather between the two cities on a day-by-day basis. The huge difference fascinates me, but the guys at work are probably annoyed with me by now.

According to AccuWeather.com, Chicago is currently well below zero (Celsius), and will be staying that way all week. The town I grew up in (Chinchilla) often goes below zero during Winter, but that only happens overnight. Chinchilla also hasn't had snow since the early 1900's (1915 rings a bell for some reason).
Brisbane has been my home for the last 17 years, and it has never been below freezing (at least, not in recorded history). So I really haven't experienced anything like Chicago before. Can you blame me for paying attention to the differences?

In the meantime, Brisbane is just starting on its first heat wave for the Summer. Fortunately, it's not supposed to get as high as 40C (104F) over the coming week, but it won't be far off. The prediction is 37C (99F). Not too bad, but unpleasant all the same. Overnight minimums are over 20C (68F), so Luc isn't sleeping too well. This is a far cry from Chicago, where the highest maximum for the coming week is -4C (24F).

This will certainly add some excitement to the move!

Thursday, December 01, 2005

Bytecodes
DavidM helped me to find a slew of bytecode libraries (many of which are here). Some of these are better than others, but I was surprised to discover that none of them work from the basis of an AST. That means a lot of work gluing an AST onto an appropriate bytecode library, which reduces the advantages of using third party libraries.

That leads me back to looking at an existing compiler, as these must already go from AST to bytecode. All I'm trying to achieve beyond this is the ability to persist and retrieve the AST in RDF, and a public API for modifying the AST. So maybe I should be going directly to a compiler like the one built into Eclipse?

Types and Programming Languages
One of DavidM's first suggestions to me was to use the expression library from Kawa. While it doesn't express an AST either, it does meet much of what I'm looking for. However, it is really centered around providing support for Scheme in a JVM.

I don't really know Scheme, and my first thought was that this would be something else I'd rather avoid learning (there's so much to learn these days - I have to be discriminating). However, Andrae was quick to point out that it's a type of Lisp, and that I already have the basics from the lectures on "Structure and Interpretation of Computer Programs" (which I have the video files for). He also pointed out that, given the Lisp heritage of OWL, I'd do well to learn Scheme. So I decided to pull the lectures back out, dust them off, and watch them right through (I'd only seen the first 4 before). It's not always easy to find the time, but I have to say that I enjoy watching them.

Fortunately, I have a new iPod (yes, that involved some wrangling with Anne), so I've converted all the lectures over to it (courtesy of FFmpeg) and can watch them whenever I have a spare moment. It's just a shame that the battery can only handle a couple of hours of video.

While discussing languages with Andrae, he mentioned Types and Programming Languages by Benjamin Pierce. I've always been impressed with Andrae's knowledge of the theoretical underpinnings of languages, and had put it down to extensive reading on the topic. This book is apparently one of the central sources for the information, so I'm thinking I'd like to read it.

I went looking around the appropriate bookstores in Brisbane yesterday, and half of them did not even have access to a distributor for this book. When I finally found one, they told me it would take 6 weeks to come in from America, and would cost $185 AUD. Looking at Amazon, I'm able to purchase the book and its sequel for only about $165 AUD (and that's probably a similar shipping time). So I'd be buying the book from there, if it weren't for the fact that we're about to move! Would I ship the book here, to my in-laws near Melbourne (where I'll be working for a couple of weeks around Christmas), or to the office in Chicago?

The visa paperwork is very frustrating. It would be nice to know when I'm moving for real. <sigh>

In the meantime I'm reading Java Puzzlers in my Copious Free Time, and enjoying it thoroughly. It's the puzzles that I couldn't answer on my own that I enjoy the most. I've been bugging all my friends with them. :-) The best part is that it's finally encouraged me to read the JVM specification.

Knowledge Mining
Today I dropped into UQ for a short seminar on Knowledge Mining, presented by Osmar Zaiane. I'm glad I went, as it discussed several techniques of semantic extraction. In particular, it described the details of several methods based on Apriori.

It also served to remind me about how much I really remember about neural networks (more than I care to admit), and that they're still considered the best solution to some classification problems. Perhaps I should revisit them.

Some of these techniques, plus a few others that were mentioned, may help me to boost the level of semantic extraction I've been able to get so far at Herzum. I'll have to look into this area a little more.

While at the university I dropped in a form to defer my enrolment for a couple of months. With the new job, the new baby, Christmas, and the impending move to Chicago, I figured I might be able to use a short break. Bob agreed that it sounds like a good idea. So I'm officially off until April, though I expect to be working on Kowari and OWL a little in the meantime.

Tuesday, November 22, 2005

Projects
I've spent my last week and a bit on JNI. It was fun when I started (since I haven't done much C lately), but ultimately tedious. After all, I'm just writing a wrapper around someone else's library. Sure, there are some cute challenges, but it's not quantum physics (that reminds me... I really need to get back to postgrad physics at some point. I should finish this Masters).

In the meantime, I've been looking at a project for web services to keep my mind fresh. Or more precisely, I've been looking at another project when I'm not working or looking after the boys (yes, I'm averaging below 5 hours sleep per night). It's a project I've mentioned on and off for a while now, but I've decided to do something about it.

The principles of the project came about through a combination of several factors. I've had some experience with an application server (where I wrote the JMX code). I've also been working with OWL, from a theoretical point of view, from an inferencing perspective, and as a modeling tool in several systems. I've been working with UML modeling, and writing an OCL interpreter to manage those models at runtime. I'm also an engineer who really likes bit-banging as opposed to all this high-level abstraction stuff (though the high-level stuff does have its charms). Finally, I've had several interesting conversations with people who've helped me crystallize what I'm trying to achieve.

The Idea
The idea is to use OWL to describe a class that can be instantiated in Java. There are several aspects of OWL that make it less than ideal for this kind of modeling, but it is still possible to use it. This is demonstrated by the ability to map most UML features into OWL. Fortunately, the flexibility of RDF allows almost any conceivable type of annotation to be added to the class, filling in any areas where OWL is not up to the task.

What would be the point of this? Well, the first thing that comes to mind is Web Services (thanks to David for this suggestion). Currently, services can be described in OWL-S. If a client does not know about a described service, then it can always try to model it (this technology is still in development). However, why simulate the model, when you can instantiate a class that meets the description of the model? This would perform better, and offer much more flexibility to the client system.

But How?
One approach for this would be to convert the OWL class into some implementing Java source code, write it to disk, and convert it into a class file with javac. I've never liked this approach. It is very slow, relies on the presence of the compiler and knowing where the compiler is, uses up disk space, and requires the entire file to be re-written for even minor changes. JSPs on Tomcat are a good example of this. Ever noticed how slow the pages are the first time you look at them? That's because the JSP is being converted to plain old Java, written to a source file, compiled to a class file, loaded, and finally run.

The way around this would be to have a compiler built in. This would avoid executing an external process. Then if the compiler could output to a memory buffer instead of a class file, the results could be fed directly into a class loader, without having to use the disk.
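
To make the class loader end of this concrete, here is a minimal sketch. It assumes some in-memory compiler has already produced a byte array of valid class data; the class and method names are mine, not from any particular library:
  /**
   * A sketch of a class loader that turns an in-memory byte[] of class data
   * into a usable Class, with no disk access at all. Where the bytes come
   * from is left to whichever in-memory compiler ends up being used.
   */
  public class ByteArrayClassLoader extends ClassLoader {

    public Class<?> defineFromBytes(String className, byte[] bytecode) {
      // defineClass is the standard (protected) hook for loading raw bytecode
      return defineClass(className, bytecode, 0, bytecode.length);
    }
  }

  // Hypothetical usage, once a compiler has produced the bytes:
  //   Class<?> generated = new ByteArrayClassLoader().defineFromBytes("GeneratedService", bytes);
  //   Object instance = generated.newInstance();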

However, a normal compiler would still expect to work on Java source code, which is just text. This still leaves the system rather inflexible, requiring a full recompile for every modification. Ideally, I'd want a compiler that could work directly on the Abstract Syntax Tree (AST) of Java. This would allow for easier and faster modifications. Since compilers have to generate an AST internally anyway, this would also make compilation faster, as the text-to-AST conversion could be skipped.

If the compiler is to be operating directly on the AST, then where would the AST come from? Normally the AST would come from Java text, but I'm trying to avoid having to repeatedly convert text into an AST. I'd like to build a class from OWL, so should I be compiling that into an AST every time? OWL is more structured than text, but it would still be a lot of work to repeat on a dynamic system.

Ideally, it would be possible to convert each of these into an AST, but to then persistently store the AST for manipulation and compilation at any time. At this point I realized that I have an RDF database (after all, I'm doing modeling with OWL), and this would be perfect for storing the AST. This started to open up new possibilities.

The system would involve storing a complete Java AST in RDF. While the schema definition will take a little while, there is nothing hard in this (the schema is not required for storage, but rather to understand the structure in order to implement the API for the AST). Getting data into this structure will require something that can compile Java source text into the AST. There are several open source Java compilers, along with SableCC definition files for parsing Java source, so this should be reasonably straightforward as well. Compiling OWL into the AST is a different matter, but appears to be possible with a set of inferencing rules.

The final step is the transformation of the AST into class files. This is a well documented procedure, though one that I've yet to learn properly. I can always leverage an open source compiler's implementation, but I will need a good understanding of this process if I'm to customize it accordingly. Besides, I've been meaning to read the Java spec for years.

Once the class binary is generated, a custom class loader will let this class be immediately loaded and instantiated. This could be very dynamic, allowing infinitely flexible new classes, with methods customized at runtime. Building these classes from semantic documents like OWL-S means that the system can dynamically reconfigure itself to manage what it discovers about the world.

AST
Pivotal to all of this is the AST. James Gosling once gave an interview about Java 1.5 (I can't find it anymore) where he talked about the inclusion of an AST API. He was also working on a project called "Jackpot" which provided this API. Obviously this never eventuated for Java 1.5, though it's been suggested for Java 1.6. So if I can't use an official AST, what should I be using?

Should I go with the internal AST of a working compiler, like Kaffe? Should I go with the AST given to me by a SableCC parser? I figured that standards are a good thing here, so I went looking for what other people use.

The one AST that seems to have the best penetration comes from Eclipse. This bothers me for a few reasons. First, it is still slow and occasionally crashes on my Mac (though recently it's been getting faster and more stable). Second is the steep learning curve there seems to be to get into the internals of Eclipse. Finally, when I looked at the structure, it appears more complex than the ASTs I've seen elsewhere (maybe it's somehow more complete?).

Anyway, I haven't coded anything yet, so I'm still looking.

Possibilities
Having an architecture that stores the AST, compiles data into it, and then emits it as binary has several other advantages. Obviously, it becomes easy to modify code programmatically, and it is possible to have a single system that can compile multiple languages into the one AST format (Java, Jython, or annotated OWL).

This kind of system also makes it easy to work backwards from binary to text. Existing class files can be decomposed into their AST, and an AST can be converted into Java source text. Jad already converts from class to source code quite successfully (I'm guessing it must use an AST internally), so a precedent has been set here. However, this system would provide extra functionality. It could take a class file at run time, decompose it, update the existing code (for instance, adding instrumentation to methods), and then reload the class. I've heard of systems which do this sort of thing (particularly for adding instrumentation), but not with an API to control the modifications.
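
For the run-time instrumentation case, the standard hook (as of Java 5) is java.lang.instrument, which hands an agent every class file as a byte array before it is loaded. Here is a rough sketch of how the AST idea could plug into it; the AST-related calls are hypothetical placeholders for the pipeline above, so I've left them as comments:
  import java.lang.instrument.ClassFileTransformer;
  import java.lang.instrument.Instrumentation;
  import java.security.ProtectionDomain;

  /**
   * A sketch of an agent that could rewrite classes as they load
   * (run with -javaagent). The AST steps are hypothetical placeholders.
   */
  public class AstRewritingAgent implements ClassFileTransformer {

    public static void premain(String agentArgs, Instrumentation inst) {
      inst.addTransformer(new AstRewritingAgent());
    }

    public byte[] transform(ClassLoader loader, String className,
        Class<?> classBeingRedefined, ProtectionDomain protectionDomain,
        byte[] classfileBuffer) {
      // Hypothetical pipeline: class file -> AST (stored in RDF) -> modified AST -> class file
      // Object ast = decomposeToAst(classfileBuffer);
      // addInstrumentation(ast);
      // return emitBytecode(ast);
      return null;  // null tells the JVM to leave the class unchanged
    }
  }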

Ontologies
Normally, I'd be horrified at the idea of letting programmers out with such a powerful API, but the purpose here is not to permit programmers to perform ad hoc modifications, but rather to modify code according to the model described in an ontology.

I think this idea offers some interesting options for dynamically implementing models, and also for using an ontology to describe a program which can then be built automatically.

Ontologies are an area of research that is still moving quickly. It's hard to know exactly what this would contribute to it, but I think it would be quite useful.

Saturday, November 12, 2005

Nic
For those who know me, I'd like to announce that Nicolas George Gearon was born at 3pm on Friday. Both he and his Mum are great (that's "Mom" for those of you in the US). Luc is still trying to work out why he is no longer the centre of attention.

Anne is a wonderful woman who never fails to impress me, especially so on days like Friday. However, she told me afterwards that if she ever does it again she'll use pain relief. I think that's fair enough. :-)

Thursday, November 10, 2005

Family
The baby was due 8 days ago, so I'm a little distracted at the moment. The doctor will be inducing Anne tomorrow morning, so I should be a father again tomorrow. I wonder how much sleep I'll get tonight?

Grammar
I'm doing some interesting things with grammar parsing at work at the moment. I've looked at a few open source grammar parsers, but the one that I like the most is Link Grammar. It doesn't parse sentences into structural elements like other parsers, but instead looks for how individual words can link together. I've been pleasantly surprised at how easy it has been to use. I'm especially impressed at how easy it has been to modify the grammar to generally handle unusual situations.

I don't know how much I should be discussing it here. Just thought I'd mention it all the same.

Objective C and Cocoa
I'm still enjoying this, but it hasn't all been plain sailing.

The other night I had a problem with some simple code that would not work. It kept failing when I tried to call a method (or "send a message" in Objective C parlance, since it's not really a method call). According to the debugger, it could not find the method I was looking for. This object was being deserialized from the NIB file, so I tried instantiating a new instance of the object, and sending the message to that object instead. It still didn't work.

Trying to see everything running, I put a printf into the init method of the object. At this point I discovered that init was not being run when I instantiated the object the second time:

  MyClass *obj = [[MyClass alloc] init];
However, the init method was running just fine when the object came out of the NIB.

Out of frustration, I re-built a lot of this project (maybe something strange was happening in the NIB, and I have no way to work on that directly - only through limited APIs and the Interface Builder application). While doing this I discovered that I could crash the application during the init method on my class. If I have the line:
CGPoint size = CGPointMake(300.0, 400.0);
then I could guarantee a bus error when returning from init.

So this looks like I've smashed the stack somehow, though I can't see what I've done. I'm guessing that it has something to do with some element of Objective C message passing that I don't yet understand.

I fixed the problem by moving the set up code into another method, and not having an init in my class. It works, but I'd love to learn what the problem is.

In the meantime, this may have pointed me to another problem I've been having. DavidM still runs OS X v10.3 (Panther), rather than v10.4 (Tiger). I tried sending him a copy of one of my programs, but it crashed mysteriously. It's even more mysterious, as 3 other people running v10.4 all found that it worked fine. Now that I've seen the problems with init I've started wondering if this could have been the cause. To complicate matters, the program in question is multithreaded, so it's possible that there's a race between the new thread and the init method, and it only shows up on v10.3. I've re-arranged the initialization in this code, so I'm keen to see how it works. (Unfortunately, I haven't seen David online since I made this change).

Compiler Hissy Fits
On other occasions I've had problems where the compiler wouldn't allow me to call init directly on a newly allocated object. So instead of the following (copied directly from the Apple Multithreading documentation):
  NSAutoreleasePool* pool = [[NSAutoreleasePool alloc] init];
I was forced to split it up:
  NSAutoreleasePool* pool = [NSAutoreleasePool alloc];
  [pool init];
Now according to my understanding of Objective C, these are equivalent, so I don't understand why the first was failing.

However, I'm still learning. I'm even taking the time to read language documents now (taking the slow and methodical approach that I eschewed last week). Hopefully this extra insight will help.

Books
A couple of days ago I read the Slashdot review of Java Puzzlers. I obviously have lots of spare time at the moment, so I decided to get a copy. I haven't been buying many technical books lately, as everything I want is usually online (and more up to date), but I love getting new books, so I thought I'd treat myself.

So far it's been a light read. I've only made it through the first couple of chapters, but I'm mostly working the puzzles out. At least it's showing me that I do seem to understand the language spec reasonably well. However, I've picked it up from usage, experimentation, and knowledge of other language implementations. I really ought to read the spec one day! :-) (I keep meaning to. Maybe this will be my incentive?)

Wednesday, November 02, 2005

Work
Working on purely proprietary code makes it awkward to write about it in public. On top of that, it leaves my time a little too committed to do open source work.

So what is there to blog about? Well, there are still a few things, but having the baby due about now (yesterday, in fact) has been enough excuse to avoid writing. All the same, maybe I need to put some effort in. I keep finding myself wondering how I did something a few weeks ago, and the blog was supposed to help with that.

Cocoa
For a change of pace, I've spent my last two evenings learning Cocoa and Objective C. I'd avoided it before now because I didn't know Objective C, and wasn't sure I wanted to learn it when almost no other system makes use of it. Similarly, Cocoa is only available on the Mac, so there's no portability, unlike with QT or Java's Swing.

However, I like the Mac, and it's been bothering me that I haven't done any GUI coding for a while. I did some work with QT earlier this year, but it never looked quite right to me. (That would be more due to my knowledge of QT than anything else, as I've seen some attractive interfaces built with QT). I enjoy drawing mathematical shapes (yes, I could just learn more GNUPlot), and Quartz 2D sounds like fun. Since my after-hours work is all about fun, these seemed as good a set of reasons as any to try it.

I reasoned that there were two approaches I could try for this. The first was the traditional one of reading the docs and then applying my knowledge. That's very successful, and gives a very thorough knowledge of the subject material, but it also takes a long time. The other option is to learn just enough to get started, and then experiment with the documentation nearby. This technique leads to a much less thorough understanding of the system, and can result in accidentally taking a poor approach to a problem (since the best approach may not have been apparent). However, it gets you up and running faster. It's also more fun, because of the immediate feedback that you get. Given my criteria of fun and lack of time, the second approach seemed like a better idea.

What could I write that would be fun, and easy to write in my limited time in the evening? I decided to draw a simple curve which meets the equations:
x = sin(theta)
y = cos(3 * theta)

This looks like 3 cycles of a sine wave, wrapped around a clear cylinder. If I got it working, then I thought I should animate it (rotate the cylinder), and maybe introduce some controls to control the speed and number of cycles.

Objective C
I started with an ObjectiveC/Cocoa tutorial. This covered the development tools (Xcode and InterfaceBuilder) more than coding, but it went through a couple of the basics, and explained the syntax for method calls in Objective C (which I'd had trouble working out when I read example code). It turns out that calling methods like this is called "sending a message", which was the first hint I had of the "disconnected" nature of some of the features in the runtime system.

Once I thought I understood how to use Objective C, I left Xcode and tried my hand at some "Hello world" style programs on the command line. This worked pretty well, and helped me work out some more of the syntax of classes. I still have a LOT to learn (for instance, while I know @interface and @implementation, I haven't worked out what @protocol does). Still, I've always felt that if you can't do something like this manually from the command line, then you'll never be able to properly use the GUI tool that automates it for you.

The only thing I struggled with was the need to inherit from NSObject. This was needed for the alloc method, which does the heap allocation. I think I could implement this myself using malloc, but it was easier to use the Cocoa system instead.

It was only later that I discovered that ObjectiveC/Cocoa does memory reclamation when objects are no longer in use. An object called an allocator is used to assign memory for objects, and register them for later cleanup. The allocator is normally provided by the system, but it can also be set manually. This is essential to know when launching new threads, as there is no default allocator (it was while reading about Cocoa threads that I learned about automated object reclamation).

Other than memory management, there are two other features to ObjectiveC that stood out for me. The first is a kind of reflection which lets you analyze the available methods at runtime. This is interesting as it means that many objects don't need to inherit an interface, but just need to implement any required methods, ignoring anything not needed. This is the technique used by delegates.

The other feature is based on the numerous message passing mechanisms supported by the runtime. This reminds me of Windows messages, but seems to be built into the runtime (it was hard to tell what was part of the ObjectiveC runtime and what was provided by the Cocoa libraries). It can sometimes be difficult initializing an object with references to every other object that it needs to talk to, so a global message passing system can be really useful on occasion.

Observations
Over the years I've done GUI programming with the Windows API, MFC, Delphi, Xlib, Gnome/GTK, QT, Java AWT and Visual Basic. I'm certainly not an expert in any of these (at least, not anymore) but it's given me a taste of a number of approaches.

Just to provide some perspective on my impressions of Cocoa, here is a quick and dirty rundown on my impressions of each:

  • Xlib: Lots of work to do anything at all. I wouldn't use it without a modern library to wrap it, to provide an attractive and consistent interface, and to provide useful modern widgets.
  • Windows API: Less work than XLib, but there is still a lot of boilerplate code. Unfortunately a lot of Windows functionality now comes through object systems (COM+, ActiveX and more), and these can take a lot of code in C. All the same, once you have your head around the parts of the system you need, it's pretty easy to code. It can just take a lot of code to do some common tasks.
  • MFC: At face value this wraps the Windows API, and implements a lot of common tasks, making it easier and quicker to write code. However, the action and messaging system is a dog's breakfast. If I want to respond to something, then do I implement a virtual function in a subclass, or do I add an entry to the message map? If it goes into the message map, then is there already a macro for it, or do I register for a particular windows message? It is also too dependent on Windows messages. Some basic functions are provided to wrap messages (e.g. setText), but most of the time the response to any message is to send a new message. Where's the OO in this? Sidestepping the class framework is possible when needed (reimplementing the message loop, for instance), but painful. Not to mention the difficulties working with MFC and ATL together. My biggest gripe was the need to learn a new API every week.
  • Gnome/GTK: An extremely flexible system, running on Windows, OS X, and any X11 system. I've only used the standard C binding, but it comes with bindings for practically every useful language, including binary OO languages like C++. Glade also does a lot of the work in designing and implementing a GUI. My experience here was extremely limited, but I felt a little overwhelmed with options. The object design in C is good, but it does force certain development patterns on the programmer (this is probably a good thing, but feels constraining). The main reason I didn't go too far here was the sheer number of libraries that need to be installed on a non-Linux system to make your application work.
  • QT: The big competitor to GTK/Gnome, running on the same systems. The GUI editor feels a little clunky, and I found it hard to make layout managers work exactly the way I wanted on all platforms. It feels more like MFC than any of the other frameworks here, but the messaging system is a lot cleaner and more consistent. I think Trolltech could have improved over MFC if they'd chosen to integrate with the C++ STL, but they haven't. This is demonstrated in their QString class, whose operator+(...) methods don't always do as you expect.
  • Java AWT: Anyone using this interface will know why I stopped using it. Even Sun realized that the "lowest common denominator between windowing systems" was never going to work, and moved on to Swing. However, it was easy enough to use, with all the work done with subclassing and methods to receive events. Layouts were limited, but their use was clearly documented. The main problem was the coding work required to set up the interface. This could be quite verbose, and needed to be run before the GUI was visible. I should just learn to use Swing, but I still hear about "clunkiness" problems. It's also buggy on OS X 10.4.
  • Visual Basic: Back to being Windows only, but very easy to set up and code. The moment you try to do anything useful you discover that you're badly hamstrung and can't do much. It was necessary to call directly into the Windows API much more than it should have been. I used to write a lot of DLLs and call them from VB. Even worse was the propensity to launch background threads that you were never told about, leading to code that could fail on some invocations, but not on others.
  • Delphi: I always found the syntax of this language to be too saccharine, but I can't deny the usefulness of this system. As easy to set up as VB, but with the power of MFC. I was particularly impressed when Borland made it fully compatible with C++ Builder. Most functionality is available in classes and methods rather than windows messages. Releasing Kylix for Linux was also a great move. It's a shame that there is no way to run it on any other system, such as OS X or Solaris. However, I can't afford the costs of Delphi, so I decided to leave it to the corporate types.

Overall, developing in Cocoa is really very nice. With object management and reflection, I almost felt like I was using Java, only easier. Everything I needed was in virtual methods, meaning I could do everything by subclassing. However, I decided that I wanted to shut down the application when the main window was closed, and I was concerned that I would therefore need to create a new subclass of NSWindow and use that as the class for the window. This is where I discovered delegates and some of the other notification mechanisms in use by ObjectiveC and Cocoa.

Delegates are objects given to a notifying object (such as an instance of NSWindow) which will be told when anything significant happens to the notifying object. The delegate object does not implement any interface and need not implement any methods except the ones it's interested in. This meant that I could simply have the main window inform an object of my choice when it was about to close, rather than having to introduce a new instance of a subclass of NSWindow, implementing a single method.
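
A rough Java analogy of the same idea (using reflection, since Java has no direct equivalent of delegates) is to check at runtime whether an object has a given method before calling it, and quietly skip it if not. The method name here is made up to mirror the window-closing case:
  import java.lang.reflect.Method;

  /** A rough Java analogy of a delegate check; the method name is made up. */
  public class DelegateCaller {

    public static void notifyIfPossible(Object delegate) {
      try {
        Method m = delegate.getClass().getMethod("windowWillClose");
        m.invoke(delegate);                // the delegate implements it, so tell it
      } catch (NoSuchMethodException e) {
        // The delegate isn't interested in this event, which is fine.
      } catch (Exception e) {
        throw new RuntimeException(e);
      }
    }
  }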

Of course, one of the nicest aspects of the system was access to subsystems like Quartz 2D. The resulting drawing had some nice extra features by default, such as antialiasing and buffering. The buffering became really apparent when I started animating the curve. I expected to do offscreen buffering and then start displaying completed images at the correct rate (there's even the NSViewAnimation class which will take these images and do the work for you), but Quartz did all the buffering for me. It even seemed to introduce some blur from one image to the next. The old drawing was wiped before the new one was created, so this appeared to be an intentional effect to make the animation seem smoother.

Some of the detractions include threads and the Interface Builder tool. Threads are managed with the NSThread class, but this really acts as a wrapper over some non-OO code that sets up a POSIX thread. The entry point to the thread can just be a method anywhere, rather than in a class specifically designed for threads. Also, the new thread has no default auto-release pool for objects, so to stop memory leaking it is important to set one up. This is trivial, but seems like the sort of thing that should have been done already. Also, operations like timers require a run loop to be processing, and again this has to be called manually. It works, but it's hardly OO.

The biggest problem with the Interface Builder was lack of documentation. The basics are covered in several tutorials, or when the help button is pressed, but I could not find advanced options mentioned anywhere. For instance, what does it mean to bind the value of a control to a user class instance? It seems to introduce a new dictionary with a key/value for the object's value, but where is this dictionary to be found?

To start with, the layout of the controls was difficult to work out, but I finally got it. I later found an obscure reference to it in some documents, but it still wasn't properly explained.

I also had several attempts to merge class files updated by Interface Builder with the originals in Xcode, but none were successful. In the end, I started generating new files from scratch, and manually copying the new elements over to the files in Xcode. I'm sure this can be made to work, but I didn't get it going. Again, documentation may be useful here.

Finally, on three separate occasions the Xcode IDE crashed when I clicked on a link in the documentation window. Normally the documentation is a separate program from the IDE, but Xcode can make requests of the documentation window all the time, so it all seems to be one program.

All these complaints aside, it is still a nice system. The Interface Builder lets you instantiate classes and then store them in the binary NIB file. These objects are then de-serialized at runtime, and are therefore instantiated on startup. This solves certain problems that can come up in object lifecycles, so I like it here. Connections between objects (between outlets and actions) are not clear unless the source object is selected in the GUI, but the concept is still nice. This is particularly useful as the objects at either end are extracted from the NIB, otherwise there would be no easy way to get references to either object to programmatically initialize this connection.

What do I think?
Weighing it up, this was one of the easier systems I've coded on, and I was happy with the resulting program. For someone who likes coding (as opposed to pure GUI configuration, as provided in VB or Delphi) this is a nice and powerful system. The tool for building interfaces is very good, but doesn't integrate into the code well, which is why I deduct marks when comparing it to Delphi. I'm looking forward to writing more code in it. Maybe I'll check out why the curve isn't appearing when I do a print preview.

I commented the code, so I'm happy to give it to anyone who wants to see some example Cocoa code.

Thursday, September 29, 2005

Work
Since I'm working on commercial software now I'll be doing a lot of logging on the company's internal Wiki instead of here. I'll continue to talk about Kowari or study in here, but there are only so many hours in the day!

Remote Servers
As part of what I'm doing this week, I need to talk to a Wordnet RDF set. With my poor little notebook struggling on all the tasks I'm giving it, I figured that it made more sense to put the Kowari server on my desktop machine (named "chaos"). Unfortunately, I immediately hit a problem with RMI that had me stumped for some time today.

Starting Kowari on the desktop box worked fine. Querying it on that box also worked as expected. But as soon as I tried to access the server from my notebook I started getting errors. Here is what I got when I tried to create a model:

create <rmi://chaos/server1#wn>;
Could not create rmi://chaos/server1#wn
(org.kowari.server.rmi.RmiSessionFactory) couldn't create session factory for rmi://chaos/server1
Caused by: (ConnectException) Connection refused to host: 127.0.0.1; nested exception is:
java.net.ConnectException: Connection refused
Caused by: (ConnectException) Connection refused
My first response was confusion at the connection attempt to 127.0.0.1. Trying to be clear on this, I changed the request to talk directly to the IP address:
create <rmi://192.168.0.253/server1#wn>;
Could not create rmi://192.168.0.253/server1#wn
(org.kowari.server.rmi.RmiSessionFactory) couldn't create session factory for rmi://192.168.0.253/server1
Caused by: (ConnectException) Connection refused to host: 127.0.0.1; nested exception is:
java.net.ConnectException: Connection refused
Caused by: (ConnectException) Connection refused
I started to wonder if this was a problem with a recent change to Kowari's code (which was a scary prospect), and started looking more carefully at the code, and the logged stack traces.

The clue came from the client trace:
Caused by: java.rmi.ConnectException: Connection refused to host: 127.0.0.1; nested exception is:
java.net.ConnectException: Connection refused
at sun.rmi.transport.tcp.TCPEndpoint.newSocket(TCPEndpoint.java:567)
at sun.rmi.transport.tcp.TCPChannel.createConnection(TCPChannel.java:185)
at sun.rmi.transport.tcp.TCPChannel.newConnection(TCPChannel.java:171)
at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:101)
at org.kowari.server.rmi.RemoteSessionFactoryImpl_Stub.getDefaultServerURI(Unknown Source)
at org.kowari.server.rmi.RmiSessionFactory.<init>(RmiSessionFactory.java:132)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:274)
at org.kowari.server.driver.SessionFactoryFinder.newSessionFactory(SessionFactoryFinder.java:188)
... 13 more
Caused by: java.net.ConnectException: Connection refused
So the problem appears to be a connection to the local system, which isn't running a Kowari instance, so it fails. The error was occurring in the RmiSessionFactory constructor, but this seemed OK, and the stack above and below it in the stack trace was all Sun code. So what was happening here?

The relevant code in the constructor looked like this:
  Context rmiRegistryContext = new InitialContext(environment);
  // Look up the session factory in the RMI registry
  remoteSessionFactory =
      (RemoteSessionFactory) rmiRegistryContext.lookup(serverURI.getPath().substring(1));
  URI remoteURI = remoteSessionFactory.getDefaultServerURI();
The failure happens on the last line here.

What is the process, and how is this failing? Well, it starts by looking up a name server to get an RMI registry context. The important thing to note here is that this works. Since the RMI registry is running on the server rather than the client, we know that it spoke to the remote machine and didn't try to use 127.0.0.1. So far, so good.

Next, it pulls apart the path from the server URI and looks for a service in the RMI registry with this name. In this case the name is the default "server1", and the service is a RemoteSessionFactory object. This also works.

The problem appears on the last line when it tries to access the object that it got from the registry. For some reason this object does not try to connect to the machine where the service is to be found, but instead tries to access the local machine. So somehow this object got misconfigured with the wrong IP address. How could that happen?

Since nothing had changed in how Kowari manages RMI, I started to look at my own configuration. Once I saw the problem, I realised how obvious it was. Isn't hindsight wonderful? :-)

Nameservers
Once upon a time I ran Linux full time on Chaos. This meant that I could run any kind of service that I wanted, with full time availability. One of those useful services was BIND (alongside DHCP), which let IP addresses be handed out dynamically to any machine on my network while still letting me address every machine by name. Of course, BIND passed off any names it hadn't heard of to higher authorities.

However, obtuse hardware, Windows-only software, and expensive VM software that suddenly stopped working one day (it died after the free support period ended, and no, I can't afford support) slowly took their toll. I finally succumbed and installed that other OS.

Once Chaos started rebooting, I could no longer rely on it for DHCP or BIND. DHCP was easily handled by my Snapgear firewall/router, but I was left without a local nameserver.

New computers on my network are usually visitors wanting to access the net. This doesn't require them to know my local machine names, nor do my other machines need to access them by name. So I figured I could just manually configure all of my local machines to know about each other and I'd be fine. This is where I came unstuck.

The problem was that I had the following line in /etc/hosts on Chaos:
127.0.0.1  localhost chaos
I thought this was OK, since it just said that if the machine saw its own name then it should use the loopback address. I've seen countless other computers also set up this way (back in the day when people still used host files). For anyone who doesn't know, 127.0.0.1 is called the "loopback address", and always refers to the local computer.

This confused RMI though. When a request came in for an object, the name service sent back a stub that was supposed to connect to a remote machine named "Chaos". However, to prevent the stub from looking up the name server every time, it recorded the IP address of the server instead of the server's name. In this case it looked up /etc/hosts and discovered that the IP for "chaos" was 127.0.0.1. The object stub then got transferred across the network to the client machine. Then when the client tried to use the stub, it attempted a connection to 127.0.0.1 instead of to the server.

The fix was to modify /etc/hosts on Chaos to read:
127.0.0.1  localhost
192.168.0.253 chaos
So now the stub that gets passed to the client will be configured to connect to 192.168.0.253. This worked just fine.
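
An alternative fix (one I didn't actually try) is the standard java.rmi.server.hostname system property, which controls the address the RMI runtime embeds in the stubs it hands out. A rough sketch, with the IP hard-coded purely for illustration:

  // Rough sketch only: tell the RMI runtime which address to advertise in stubs.
  // This must be set in the server JVM before any remote objects are exported.
  public class RmiHostnameFix {
    public static void main(String[] args) {
      System.setProperty("java.rmi.server.hostname", "192.168.0.253");
      // ... start the Kowari server / export remote objects after this point ...
    }
  }

The same thing can be done on the command line with -Djava.rmi.server.hostname=192.168.0.253.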

So now I know a little more about RMI. I also know that if I ever get any money, I really want a spare computer so I can boot up Windows and not have to take my Linux server offline to do it.

Tuesday, September 20, 2005

Logic
Work is keeping me busy in Chicago at the moment. That's not to say that I'll write more when I get home, but I hope I will.

Yesterday I was in a meeting being told about the semantic products provided by Exeura. Exeura is a commercialization spin-off from the University of Calabria, so many of the products use new technologies that are not yet fielded commercially elsewhere. In the course of the meeting, one of the products was described as being based on "Disjunctive Logic". When queried on this, the explanation was that Disjunctive Logic is a type of logic used commonly around the world.

Now I know there are a lot of logic families, but I hadn't heard of this one. So I went looking. Funnily enough, most of the useful references I found were publications from the very people at Exeura. That's not to say they invented Disjunctive Logic, but they are some of the co-authors of the DLV project, which is one of the only disjunctive logic processing systems. I'll confess that this made me a little cautious about its readiness for commercialization, but I'll have to reserve judgment until I see it.

I looked up Disjunctive Logic to try to learn what makes this branch of logic special. The first paper I decided to read was co-authored by the same people at Exeura, so at least I knew I'd be looking at the same thing. Almost straight away I saw the following:
Disjunctive logic programs are logic programs where disjunction is allowed in the heads of the rules and negation may occur in the bodies of the rules.
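
In other words, where a normal rule has a single atom in its head, a disjunctive rule has the general shape (my own paraphrase of the definition above):

  $a_1 \lor \dots \lor a_n \leftarrow b_1, \dots, b_k, \mathrm{not}\ b_{k+1}, \dots, \mathrm{not}\ b_m$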

That's when the little light came on for me!

Everything I read over the next few pages was exactly what I expected to read. This is because I've been running into this all the time recently. I've been using cascading equations in ordinary description logic in order to avoid disjunctions in the heads, and here is a tool that is specifically designed to allow for that. I still have to read all the details, but I can see how useful this could be.

Sudoku
The first example of disjunctive logic that I can think of is in a non-trivial game of Sudoku. I first played this 2 weeks ago, discovering quickly that it was just a simple logic puzzle. I expect that most programmers were like me, and immediately started thinking about how to solve the puzzle with a computer program. It seems to come down to 3 simple rules, and I've noticed that the "harder" the puzzle (according to the rating system in the book I have), the more of the rules you have to employ in order to solve it.

I use the name "group" for all the squares in a 3x3 grid, a row, or a column. I'll do that here for clarity.

It's my third rule which is relevant to disjunctive logic. It states:
If there are n squares in a group which contain exactly the same n possible numbers, then those numbers are not possibilities in any other square of that group.

(Actually, there's a corollary: If there are n squares in a group which contain at least the same n possible numbers, and those n numbers appear in no other squares of the group, then any other possible numbers in those n squares may be eliminated)

So what is this saying? For instance, consider a column with several empty squares. For two of these squares we've determined that they could only be the numbers 2 or 3. That means that we know that if one of those squares is a 2, then the other must be a 3, and vice versa. We don't yet know which way around, but this is still enough to tell us that no other square in the column can be a 2 or a 3. If there were another square which had been narrowed down to being a 2 or a 5, then the 2 can be eliminated, letting us put a 5 in there.

So even though we only had partial information of the contents of 2 squares (we knew the numbers, but not which way around to put them), it was still enough to tell us how to fill in another square. This is a case of disjunctive logic, as our result (the head of an equation) was an OR between two possibilities, but this was still enough to solve for the body of the equation.
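
To make the rule concrete, here is a toy sketch in Java (my own illustration, not code from any real solver): each square in a group is represented by its set of remaining candidate digits, and the rule strips the shared candidates out of every other square in the group.

  import java.util.*;

  // Toy implementation of the "n squares sharing exactly the same n candidates" rule
  // for a single group of squares. Solved squares are just sets with one candidate.
  public class NakedSetRule {

    // Removes the shared candidates from every other square in the group.
    // Returns true if anything was eliminated.
    static boolean apply(List<Set<Integer>> group) {
      boolean changed = false;
      for (Set<Integer> candidates : group) {
        // Count the squares holding exactly this candidate set.
        int matches = 0;
        for (Set<Integer> other : group) {
          if (other.equals(candidates)) matches++;
        }
        // n squares with the same n candidates: no other square can hold those digits.
        if (matches == candidates.size()) {
          for (Set<Integer> other : group) {
            if (!other.equals(candidates)) {
              changed |= other.removeAll(candidates);
            }
          }
        }
      }
      return changed;
    }

    public static void main(String[] args) {
      // Two squares narrowed to {2,3} and one narrowed to {2,5}: the rule leaves {5}.
      List<Set<Integer>> column = new ArrayList<>();
      column.add(new HashSet<>(Arrays.asList(2, 3)));
      column.add(new HashSet<>(Arrays.asList(2, 3)));
      column.add(new HashSet<>(Arrays.asList(2, 5)));
      apply(column);
      System.out.println(column);   // the {2,5} square has been reduced to {5}
    }
  }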

OWL
This also works with OWL, particularly the cardinality questions I was struggling with some time ago.

If a class has a restriction:

  <owl:Class rdf:ID="MyClass">
    <owl:intersectionOf rdf:parseType="Collection">
      <owl:Class rdf:ID="MyOtherClass"/>
      <owl:Restriction>
        <owl:onProperty rdf:resource="#myProperty"/>
        <owl:maxCardinality>2</owl:maxCardinality>
      </owl:Restriction>
    </owl:intersectionOf>
  </owl:Class>
Then objects of type MyClass can refer to at most two objects with the myProperty predicate. So if I have the following:
  <namespace:MyClass rdf:about="#myClassInstance">
    <myProperty rdf:resource="namespace:A"/>
    <myProperty rdf:resource="namespace:B"/>
    <myProperty rdf:resource="namespace:C"/>
  </namespace:MyClass>
Then I know that at least two of A, B and C must be the same thing, e.g. if A and B are the same:
  <rdf:Description rdf:about="namespace:A">
    <owl:sameAs rdf:resource="namespace:B"/>
  </rdf:Description>
Of course, the possibilities are: A=B or A=C or B=C. (where I'm using = like owl:sameAs).
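
Written as a disjunctive rule in the style above (the predicate names are made up for illustration), the entailment is roughly:

  $\mathit{sameAs}(A,B) \lor \mathit{sameAs}(A,C) \lor \mathit{sameAs}(B,C) \leftarrow \mathit{MyClass}(x), \mathit{myProperty}(x,A), \mathit{myProperty}(x,B), \mathit{myProperty}(x,C)$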

So this is a similar situation to the Sudoku example. We know there are only 2 objects, but we have 3 labels for them. That means that we have partial information, but like the Sudoku example, the partial information is still useful.

The question is, how do I process this partial information? Until today I had no idea. Now it appears that Disjunctive Logic was specifically designed for this situation. :-)

Of course, this is only relevant if there is useful processing to be done. Unlike Sudoku, OWL can say lots of things with no consequences. For instance, using a predicate more often than specified in an owl:maxCardinality restriction will not create an invalid document unless there are sufficient owl:differentFrom statements to differentiate the objects. It is impossible to violate owl:minCardinality unless the range of the property is too restricted in number (an uninstantiable class, or a class enumerated with owl:oneOf). I've talked about this in the past.

So with such an open system, will the extra processing allowed by Disjunctive Logic actually gain me anything? I'm not sure yet. Give me some time to find out.

Monday, September 05, 2005

Work in the USA
There is no technical content here at all, so I hope you're not expecting any. Hey, these are my notes, so I can write about whatever interests me, right? :-)

After my last day with NGC, I still had some things to do before I went home.

Some months ago, Herzum Software from Chicago got in touch with me about doing some work with them. They'd previously been doing business with Tucana, and were interested in Kowari and the inferencing work I've been doing. After several phone calls, etc, these discussions turned into an offer of work, based in Chicago.

I thought about this for a while. There were a lot of reasons to turn them down. Things have been going really well for me this year, and I've been enjoying the opportunity to work for myself, and concentrate on those areas of interest to me. I've also been looking forward to working with Andrae on full time Kowari work. I love the lifestyle in Brisbane, and I've been earning enough that we have been living quite comfortably. Considering this, it seemed like an unnecessary move, particularly with a second baby on the way. In Brisbane we have friends and relatives nearby who can help out.

On the other hand, contracting has its downsides. There is always the concern of finding the next job, and the bank being unwilling to finance the move to a bigger house (no more expensive than what we have, but Australian banks won't talk to someone without a guaranteed income).

I've also wanted the experience of permanent work overseas, and it would be much better to do it while the children are young. The requirement of a visa means that I'd have to work for someone else, no matter how much I like my current independence. So it seemed that it would be worthwhile considering their offer.

There were also a couple of opportunities on the east coast of the US. I was interested in these, partly because of proximity to other large companies involved in semantic web technologies, MINDSWAP, friends like DavidW, and quick trips to Europe. However, Anne liked the idea of an architectural city like Chicago, and everyone we spoke to had great things to say about the place (except for the cold in winter).

After swinging back and forth on the idea, I decided to visit Herzum at the end of my trip to help me work out what I wanted to do.

Chicago
This time I flew out of BWI (I couldn't have handled another trip to Dulles during peak hour). I killed a few hours in the bar there with one of the guys from NGC (thanks Clay!), before catching my flight to Midway. As usual, the plane was delayed through bad weather.

Herzum keep their own apartment very close to their office (both are located in the "Loop" in the city), and they'd offered to put me up there while I visited. This was my first opportunity to meet the CTO, Bill, whom I'd been conversing with for some time. He was every bit as hospitable as our conversations had led me to believe.

I spent the Friday talking with the developers, discussing what it is that they do, while also discussing aspects of RDF and semantics that may be of benefit to them. In general, I was impressed with everything I heard.

There did appear to be a singular focus on the design paradigms set out in the book written by the company founder Peter Herzum (who was travelling at the time, and not available to meet, unfortunately). However, it also made sense that this would be the case.

I got an opportunity to have lunch with everyone there, and a few of them took me out to dinner nearby as well. Afterwards I was taken on a quick walk up to the Tribune building (quite a famous landmark, and the inspiration for many scenes from "Batman" comics), and got to see some of the other architecture on the way back to the apartment. So far I was having fun. :-)

After a slow start on Saturday, I met with one of the guys to visit "Bodyworks" at the museum, though the tickets were sold out. I still enjoyed getting to see a lot of the sights as we drove about, and we finally went back into town to see the John Hancock building. We had a drink at the bar while admiring the view, and then met another developer to have a late lunch. We then caught a movie and finally headed on home.

I'm glossing over all of this, but I had a great time. More importantly, I quite enjoyed the conversation of the people I was with. They all seemed quite intelligent, and demonstrated a great knowledge of their field. I could definitely learn something from each of them. It is important to me for my co-workers to have these qualities, and it is the reason I enjoyed working at Tucana so much. This reason alone was enough for me to give greater consideration to the position.

Wandering Sunday
I got back to the apartment reasonably early on Saturday evening. Bill had left that day to fly home to California, so I was on my own. I was missing my family and didn't want to be on my own, so I decided to go out to find a bar. I did my best to speak with people in the hope that my accent would land me in some interesting company. :-) This worked out OK, and I met up with quite a few people. I even stopped in at a McDonalds late at night for a snack on the way home (I was already gaining weight on this trip, so why fight it?) ;-) That mightn't seem like a big deal, but I don't normally eat at places like that, and Anne would have roused on me. As a point of trivia, I discovered later that this McDonalds was one of the largest in the world (It did sort of seem large at the time!)

Sunday morning I was supposed to meet up with Luigi, the VP at Herzum, who had just arrived back in town. He had to catch up with family, so he suggested that I go up to Lincoln Park to look around at a potential place to stay if we move over (I don't know if we can afford it, but it was worth a look anyway). I walked down to "State and Lake" to catch a train north, and then walked east towards the park.

I tried dropping into a general store for a light snack for lunch, but I was disappointed to discover nothing more nutritious than Twinkies and crisps. I really hope that place wasn't indicative of the general standard of snack food in the States!

I started with the Conservatory at the park, and then moved into the zoo. It was an enjoyable walk, but warmer than I expected Chicago to be. Not as hot as Brisbane gets, but still uncomfortably warm (why did I go into the Conservatory on a day like that? I must have been slightly crazy from walking in the heat!).

Walking along the lake back down to the city I saw just how many people go to the "beach" on a hot day in Chicago. It looked quite inviting, despite the lack of surf. I've never seen anything like the Great Lakes before, and this more than anything else made Lake Michigan look to me like an "inland sea".

That night I got to meet Luigi for the first time as we went to a nearby tapas bar for dinner.

Home Stretch
Monday was a little quieter. After packing, I went down to Herzum's office where I spent my morning talking with some of the staff and looking at Peter's book. Luigi went through some of the details of the offer, but at this stage I wasn't sure that it looked all that good. Besides, I was tired and missing my family, so I didn't trust myself to make a decision (though any decision to take the job would need agreement from Anne!).

Steve, the CFO, took me to lunch, and then I was off to the airport to get home (thanks to Dan for making sure I caught the right train to get to Midway on time).

I met some nice people on both flights home. On the Midway-LA flight I met an Ironman triathlete (I look up to these guys) who told me about the XTerra series, one of which she had just competed in. And crossing the Pacific I met a Melbourne lady I've now become friends with, whose husband is about to take a job in the Triangle area in North Carolina (one of the few places in America that I've been to).

Job
Since getting home I've spent quite a bit of time just trying to get over jet lag and spending time with Luc. This is one of the reasons why my blogging has been so sporadic recently.

After some further talks with Luigi, Anne and I decided that I should take the job. This is a big deal, and has us both a little intimidated!

To start with, I'm a contractor working remotely with Herzum until I can get a visa to become a full time employee. Fortuitously, Australians are now eligible for an E-3 visa, instead of the old H-1B, which should make things a little easier for us (it will let Anne return to work eventually). All the same, the visa application will take a little while, so we have to wait while that comes through.

Part of my agreeing to the position was that we wouldn't have to move until January. This is just because the baby is due on the 3rd of November. For reasons of both cost and family support we definitely want to have it here in Brisbane. We're told that we shouldn't be making a major move with a newborn for at least 6 weeks, which takes us right into Christmas. Even if we wanted to move then (which we don't), it would be insane to try it during the peak travelling season. So we'll spend a couple of weeks seeing each of our families over Christmas (they will want to spend time with the grandchildren before we go overseas) and then move to Chicago in the first week of January. What a wonderful time of year to move to Chicago! :-)

In the meantime, I'm learning as much about Herzum as I can, and also looking into some of the software I may be working with. I'll continue to blog, but since I'll be working with commercial systems I may have to narrate in generalities. I don't expect things to be a real problem, as part of my work will be on open source systems (such as Kowari, or Lucene) so I should still be able to make comment on what I find.

I already have a few things to say about what I've been reading lately, but given the current late hour, I'll leave that for another time.

Sunday, August 28, 2005

Out of Hours
The rest of my week was almost as busy as the time I spent elucidating Kowari.

On Tuesday DavidW and I went down to the University of Maryland's MIND lab to meet some of the MINDSWAP group. We were shown a very impressive demonstration of ontology debugging in Pellet, using a couple of different methods labelled white-box debugging and black-box debugging. As the names imply, the white-box method carefully follows the reasoning used by Pellet, while the black-box method looks at inputs and outputs. I'm not sure how much of this could be automated, but as a tool for debugging it was really impressive. It was enough to make me consider hooking Kowari into the back end of Pellet.

In fact, hooking Pellet into Kowari has a couple of things going for it. First, it gives me a point of comparison for my own work, both in terms of speed and correctness. Ideally, my own code will be dramatically faster (I can't see why it wouldn't be). However, having followed Pellet for a little while, I'd expect it to provide a more complete solution, in terms of entailment, and particularly with consistency. The second reason is to provide a good set of ontology debugging tools.

Kowari Demo
I was also asked to give a demonstration of the new rules engine in Kowari. I'd been running it for a couple of weeks at this point, and trusted it, but it still made me nervous to show it to a room full of strangers, all of whom understand this stuff. Everyone seemed happy, but it gave me a little more motivation to get back to work on completing the implementation.

Fujitsu has a lab upstairs from MINDSWAP, and a couple of their people had asked to come along to meet me. While we were there, they asked me several questions about how to make Kowari load data quickly. It seemed that they were sending insertions one statement at a time, so we suggested blocking them together to avoid some of the RMI overhead. They also invited me back the following day to see what they've been doing.

OWL for Dinner
Afterwards, DavidW and I went to dinner with Jim Hendler. Nice guy, and he was quite happy to answer some of my questions about RDFS and OWL. The one thing I remember taking away from that night was a better understanding of the agendas of the data modellers and the logic people participating in OWL. This culminated in the explanation that RDFS is that part of OWL that everyone could easily agree to in the short term, thereby enabling an initial release of a standard for this kind of work. This explains quite a lot.

It wasn't explicitly mentioned, but I sort of inferred that the separation of OWL DL and OWL Full was the compromise arrived at between the ontologists who needed to express complex structures (OWL Full) and the logic experts who insisted on decidability (OWL DL).

Fujitsu
The following night I was back down at the University of Maryland, this time visiting the Fujitsu labs.

The evening started with a follow up question about bulk loads. There were two problems. The first was that they were running out of memory with insertions, and the second was the speed.

The memory problem turned out to be a result of their insertion method. It seemed that they were using a single INSERT iTQL statement, with all of the statements as a part of a single line. Of course, this was being encoded as a Java String which had to be marshalled and unmarshalled for RMI. Yuck. As a quick fix I suggested limiting the size of the string and using several queries (this worked), but I also suggested using N3 as the input format to avoid RMI altogether.

The speed problem was partly due to the RMI overhead (it was marshalling a LOT of text to send over the network!), but mostly because the insertions were not using transactions. I explained about this, and showed them how to perform a write in a single transaction. The result was a speed improvement of an order of magnitude. I'm sure that made me look very good. :-)
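
The shape of the change was roughly as follows (a sketch only: ItqlClient and its execute method stand in for whatever mechanism is used to send iTQL to the server, and the model URI is made up):

  import java.util.List;

  // Hypothetical stand-in for the client-side call that sends an iTQL command.
  interface ItqlClient {
    void execute(String itql);
  }

  // Sketch of batching inserts into moderately sized INSERT commands,
  // all wrapped in a single transaction. Each entry in triples is assumed
  // to be a "<s> <p> <o>" fragment of iTQL.
  class BulkLoader {
    private static final int BATCH = 1000;   // keep each INSERT string a manageable size

    static void bulkInsert(ItqlClient client, List<String> triples) {
      client.execute("set autocommit off;");            // one transaction for the whole load
      for (int i = 0; i < triples.size(); i += BATCH) {
        List<String> chunk = triples.subList(i, Math.min(i + BATCH, triples.size()));
        client.execute("insert " + String.join(" ", chunk)
            + " into <rmi://example/server1#model>;");
      }
      client.execute("set autocommit on;");             // commits everything at once
    }
  }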

While there I was also shown a project for a kind of "ubiquitous computing environment". This integrated a whole series of technologies that I was familiar with, but hadn't seen together like this before.

The idea was to take data from any device in the vicinity, and direct it to any other device that was compatible with the data type. Devices were found dynamically on the network (with Zeroconf, IIRC) and then queried for a description of their services. These descriptions were returned in OWL-S, providing enough info to describe the name of the service, the data formats accepted or provided by the service, URLs for pages that control the service, and so on. They even had a GUI configuration tool for graphically describing a work flow by connecting blocks representative of the services.

As I said, there was no new technology in this implementation, but it's the first time I've ever seen anyone put it all together and make it work. The devices they had working like this included PDAs, desktops, intelligent projectors, cameras, displays, databases, file servers and telephones. It looked great. :-)

Friday, August 26, 2005

Final Week
Once I made it to the final week, the plan was to go through the remaining layers of the storage code, and use any remaining time to go through examples and questions.

There are three main components of functionality in Kowari's storage layer: the node pool, the string pool, and the statement store. Once upon a time these three operated together, but now the statement store is off on its own behind the new Resolver interface, while the node and string pools are accessible to the whole system. However, their unified functionality has not changed.

All three components handle transactions autonomously, managing the phases of all their underlying components. The overall Session class uses a two-phase commit operation to keep the transactions of each component in synch with the others. It is also in these top level components that the files are all managed. The files which are created and manipulated at this level are generally used by the classes at lower levels (for instance, the BlockFiles and IntFiles which are used by FreeList and AVLTree) but there are also other files which handle session locking, and persistence of phase information for the transactions.
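
As a toy illustration of the two-phase pattern (not the actual Session code):

  import java.util.List;

  // Each storage component prepares its new phase first; only when every prepare
  // has succeeded are they all told to commit, otherwise everything rolls back.
  interface ToyTransactionParticipant {
    void prepare() throws Exception;   // write the new phase, ready to become valid
    void commit();                     // make the prepared phase the valid one
    void rollback();                   // discard the prepared phase
  }

  class ToySession {
    static void commitAll(List<ToyTransactionParticipant> components) {
      try {
        for (ToyTransactionParticipant c : components) c.prepare();
      } catch (Exception e) {
        for (ToyTransactionParticipant c : components) c.rollback();
        throw new RuntimeException("Transaction rolled back", e);
      }
      for (ToyTransactionParticipant c : components) c.commit();
    }
  }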

Once I'd reached this level, I had all of the information needed to explain the data formats in each file. It was awkward to explain the structures before this stage, since several important structures (notably, the phase information) contain information from every layer. Trying to describe a single layer leaves large holes in the structure, and has led me into confusing conversations in the past when I try to skip over these holes. But at this point I was finally able to write up the file structures, one byte at a time (the values are typically 64-bit longs, but ints and bytes are occasionally used).

I'd like to properly document each of these components, along with the associated file formats, but for the moment I'll just give an overview.

Node Pool
The idea of the node pool is to allocate a 64-bit number to represent each resource in the RDF data. We call these numbers "graph nodes", or just gNodes. GNodes get re-used if they are freed. The re-use is for several reasons, the most notable being to prevent a numeric overflow (it also helps the string pool if there are few holes in the address space for nodes). However, a resource ID cannot be re-used if there are any old reading phases which still expect the ID to refer to the old data.

These requirements are exactly met by a FreeList, so the node pool is just a FreeList along with all the code required for file management and transactions.
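
In spirit (ignoring files, transactions and phases entirely), the node pool behaves something like this toy version:

  import java.util.ArrayDeque;
  import java.util.Deque;

  // Toy node pool: hands out 64-bit gNode IDs and re-uses freed ones so the
  // address space stays dense. The real FreeList also has to delay re-use until
  // no open reading phase can still see the old binding.
  class ToyNodePool {
    private long next = 1;                        // next never-used gNode
    private final Deque<Long> free = new ArrayDeque<>();

    long allocate() {
      return free.isEmpty() ? next++ : free.pop();
    }

    void release(long gNode) {
      free.push(gNode);                           // real code: only after old phases close
    }
  }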

String Pool
The string pool holds all of the URI References and Literals in the database. When it was first written, the only literals we stored were strings, and since URIs are also represented with strings, we called the component the "string pool". The string pool stores lots of other data types as well, but the name has stayed.

The string pool provides a mapping from gNodes to data objects, and from those objects back to the gNode. It also provides a consecutive ordering for data so that it can be used to easily work with a range of values.

The mapping of a gNode to the data is done with a simple IntFile. Each data element can be represented with a buffer of fixed length (overflows for long data types such as string are stored at a location referred to in this buffer). To find the data buffer for a given gNode, the gNode number is multiplied by the record size of the buffer. This is why the string pool prefers the node pool to re-use gNodes, rather than just incrementing a counter. Given that these records are all the same length, I'm not sure why a BlockFile was not used instead of an IntFile, but the effect is the same.
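
The lookup itself is nothing more than a seek to a computed offset, something like this toy version (the record size and file handling here are illustrative, not Kowari's):

  import java.io.IOException;
  import java.io.RandomAccessFile;

  // Toy gNode-to-data lookup: fixed-length records addressed directly by gNode number.
  class ToyGNodeIndex {
    static final int RECORD_SIZE = 80;            // illustrative fixed record length

    private final RandomAccessFile file;

    ToyGNodeIndex(String path) throws IOException {
      this.file = new RandomAccessFile(path, "r");
    }

    byte[] record(long gNode) throws IOException {
      byte[] buffer = new byte[RECORD_SIZE];
      file.seek(gNode * RECORD_SIZE);             // the offset is just gNode x record size
      file.readFully(buffer);
      return buffer;
    }
  }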

The mapping of data back to the gNode is accomplished by putting all data into an AVLTree. The records in this tree are identical to the records in the IntFile, with an addition of the gNode to the end of the record. The tree also provides the ordering for the data. This allows strings to be searched for by prefix, URIs to be searched for by domain, and date or numeric ranges to be found.

One problem with this structure is that it is impossible to search for strings by substring or regex. This is why we have a resolver for creating Lucene models. However, it's been in the back of my mind that I'd love to see if I could build a string pool based on a trie structure. (Maybe one day.)

The data structure holds up to 72 bytes in the record. Anything longer than this (typically a string) has the remainder stored in a separate file. We have a series of 20 files to handle the overflow, each storing blocks twice the size of the blocks in the previous file. This lets us have random access to the data, while reducing fragmentation. It also allows us to store data objects of up to several GB, though we don't expect to ever need to handle anything that large.

When the string pool and node pool are combined, they provide a mechanism for storing and retrieving any kind of data and associating each datum with a numeric identifier.

Statement Store
The statement store is the heart of Kowari's approach to storing RDF.

Each RDF statement is stored as a quad of the form subject, predicate, object and model. We originally stored a triple of subject, predicate, object, but quickly realised that we needed the model (or graph) element as well. Again, the interfaces already existed, so the statements are referred to throughout the code as triples rather than quads.

These statements are stored in 6 different AVL trees, with each tree providing a different ordering for the statements. I've already discussed the reason for this at length. Ordering the statements like this allows us to treat the storage as an index.

Of course, the representation of the statements is with the gNode IDs for each resource, rather than the resources themselves. This means that the indexes contain numbers and nothing else.
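
As a toy illustration of what "six orderings of the same numbers" means (the actual permutations used by Kowari may differ from the ones chosen here):

  import java.util.Comparator;

  // Each statement is four gNode IDs: subject, predicate, object, model.
  // Every index sorts the same statements by a different column permutation,
  // so any constraint pattern can be answered with a range scan on one of them.
  class ToyStatementOrderings {
    static final int[][] ORDERINGS = {
      {0, 1, 2, 3}, {1, 2, 0, 3}, {2, 0, 1, 3},   // SPOM, POSM, OSPM (illustrative)
      {3, 0, 1, 2}, {3, 1, 2, 0}, {3, 2, 0, 1}    // MSPO, MPOS, MOSP (illustrative)
    };

    static Comparator<long[]> comparator(int[] order) {
      return (a, b) -> {
        for (int column : order) {
          int c = Long.compare(a[column], b[column]);
          if (c != 0) return c;
        }
        return 0;
      };
    }
  }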

While simple in principle, the code here is actually quite complex, as it has numerous optimisations for writing to multiple indexes at once. Unfortunately for me, several of these optimisations were introduced after I last had a hand in writing the code, so it needed a little effort for me to understand it sufficiently to explain it to others.

Each of the indexes is handled by a class called TripleAVLFile. This class knows about its required ordering, and manages its own AVLFile. The nodes in this tree actually represent a range of RDF statements, with a minimum, a maximum and a count. By handling blocks of statements like this, the overhead of maintaining the tree is reduced, and searching is sped up by a significant constant factor (it doesn't show up in a complexity calculation, but this is the real world we're talking about, so it matters). Once the correct node in the tree is found, it contains a block ID for a block in a separate ManagedBlockFile which contains all of the RDF statements represented by that node.

The 6 TripleAVLFiles manage both their trees and the files full of blocks (the AVLFile and the ManagedBlockFile). This is simple enough when reading from the index, but takes some work when performing write operations. Trying to insert into a full block requires that block to be "split" in a similar way to node-splitting in B-trees, but with co-ordination between the AVL tree and the block file. Writes are also performed in a thread owned by the TripleAVLFile, so that multiple modifications to a single location in the index can be serialised rather than being interspersed with writes to the other 5 indexes.

The details of these and other optimisations make this code a complex subject in itself, so I'll leave a full description for when I get around to proper documentation. I should comment that each of these optimisations was only adopted when it was proven to provide a benefit. Complexity can be the bane of performance, but DavidM did his work well here.

Reads and writes are managed by the statement store, with reads being directed to the appropriate index, and writes being sent to all 6 indexes. The other job of the statement store is to manage transactions, keeping all of the indexes reliably in synch.

Integrating
A description of the storage layer is completed by describing how the Node Pool, the String Pool, and the Statement Store are all tied together.

When a query is received by the server, it must first be localized (I'm using the American "z" here, since the method names use this spelling). This operation uses the String Pool to convert all URIs and literals into gNode numbers. If an insert is taking place, then the Node Pool is also used to allocate new gNodes where needed.
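
Conceptually (ignoring persistence and transactions), localization is little more than a map lookup with allocation on insert, something like this toy version built on the node pool sketch from earlier:

  import java.util.HashMap;
  import java.util.Map;

  // Toy localization: map each URI or literal to its gNode, allocating a new
  // gNode when an insert introduces a value that hasn't been seen before.
  class ToyLocalizer {
    private final Map<String, Long> stringPool = new HashMap<>();
    private final ToyNodePool nodePool = new ToyNodePool();   // from the earlier sketch

    long localize(String value, boolean allowAllocation) {
      Long gNode = stringPool.get(value);
      if (gNode == null) {
        if (!allowAllocation) {
          throw new IllegalArgumentException("Unknown value: " + value);
        }
        gNode = nodePool.allocate();
        stringPool.put(value, gNode);
      }
      return gNode;
    }
  }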

A query's principal components are a FROM clause (which is an expression containing UNIONS and INTERSECTIONS between models) and a WHERE clause (which is an expression of conjunctions and disjunctions of constraints). Each constraint in the WHERE clause may have a model specified, else it will operate on the expression in the FROM clause. To evaluate a query, the FROM and WHERE expressions need to be merged. This results in a more complex constraint expression, with each constraint having its model specified. The operations in the FROM clause get transformed into disjunctions and conjunctions in the new constraint expression, hence the increase in complexity for the expression, though the individual constraints are still quite simple.

The server then iterates over the constraints and works out which resolver should be used to evaluate each one. In most cases, this is the "System Resolver" as defined by the PersistentResolverFactory tag in the kowari-config.xml file. By default, this is set to the Statement Store described above.

Once the resolvers for each constraint are found, the constraints are sent off to be resolved. The Statement Store resolver tests the constraint for the location of variables, and uses this to determine which index is needed. It finds the extent of the solution, and returns an object containing this information and a reference to the index, so that the results can be extracted through lazy evaluation.

Next, the results are "joined" according to the structure of the constraint expression. These operations are done with lazy evaluation, so it is not quite as simple as it sounds, particularly when optimisation is considered, but the method calls at a higher level are all straightforward (as you'd expect).

The final results are then globalized by using the String Pool to convert all the numbers back into URIs and literals. The globalized answer then gets serialized and sent to the client.

Insertions and deletions use a similar process to what I've just described.

This really glosses over the details quite a bit, but it provides an outline of the process, and explains what I was doing for most of the week. When I get time I hope to document it properly, at which point this post should help by providing me with an outline.