Friday, March 31, 2006

Programming


I've actually been doing some programming at home lately, which seriously cuts into the time in which I might blog. At least I've felt productive. That may have to take a back seat for a few days, as we're moving apartments this weekend.

Brian at work has created a small auditing library, which developers can call to log when methods are entered and exited. The cool thing is that he has turned the results into some interactive SVG that plots the course of the program.

At this point I remembered that Java 1.5 now includes instrumentation. If this could be used to automate the calling of the auditing methods, then it would remove the burden from the programmer (who, let's face it, could get it wrong, or just be too lazy to bother). So I started learning a little about the new interfaces.

It turns out that instrumentation is quite simple. In fact, it's too simple. Look at the size of the API, and you'll see that there are only a couple of methods. All it does is inform you when a class is being loaded, and provides you with read/write access to the bytecode for that class. That's about it. If you want to take it any further, then you have to get your hands dirty and manipulate the bytecode.
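To give a feel for just how small the interface is, here's a minimal sketch of an agent. The class name AuditAgent is mine, and returning null simply leaves the class untouched; the only real API involved is premain() and addTransformer():

    import java.lang.instrument.ClassFileTransformer;
    import java.lang.instrument.Instrumentation;
    import java.security.ProtectionDomain;

    // Named by the Premain-Class attribute in the agent jar's manifest,
    // and loaded with -javaagent:auditagent.jar
    public class AuditAgent {
      public static void premain(String agentArgs, Instrumentation inst) {
        inst.addTransformer(new ClassFileTransformer() {
          public byte[] transform(ClassLoader loader, String className,
                                  Class<?> classBeingRedefined,
                                  ProtectionDomain protectionDomain,
                                  byte[] classfileBuffer) {
            // You are handed the raw bytecode of every class as it loads, and
            // may return a modified copy, or null to leave the class alone.
            return null;
          }
        });
      }
    }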

Fortunately the articles I found online did just this. The library generally used for this is ASM. There are several others, but ASM has the advantage of being simple and lightweight.

ASM also has one killer feature that I have yet to see in other bytecode manipulation libraries: an Eclipse plugin with some great features. This plugin not only lets you view the bytecode of your current code, but it also lets you switch to a view showing the ASM code required to generate that bytecode. So all you have to do is type in an example of the code you'd like to generate the bytecode for, and the plugin will show you the ASM library calls you need to make. This almost makes coding with ASM trivial.
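As a taste of what those generated calls look like, here's a rough sketch of a class rewriter that inserts a call to a hypothetical Auditor.enter() hook at the start of every method. It's written against a recent ASM API (the visitor classes were named slightly differently back then), so treat it as an illustration rather than the code I actually used:

    import org.objectweb.asm.ClassReader;
    import org.objectweb.asm.ClassVisitor;
    import org.objectweb.asm.ClassWriter;
    import org.objectweb.asm.MethodVisitor;
    import org.objectweb.asm.Opcodes;

    public class AuditRewriter {
      public static byte[] rewrite(byte[] original) {
        ClassReader cr = new ClassReader(original);
        ClassWriter cw = new ClassWriter(cr, ClassWriter.COMPUTE_MAXS);
        cr.accept(new ClassVisitor(Opcodes.ASM9, cw) {
          @Override
          public MethodVisitor visitMethod(int access, final String name, String desc,
                                           String signature, String[] exceptions) {
            MethodVisitor mv = super.visitMethod(access, name, desc, signature, exceptions);
            return new MethodVisitor(Opcodes.ASM9, mv) {
              @Override
              public void visitCode() {
                super.visitCode();
                // push the method name and call the auditing hook before the original body
                visitLdcInsn(name);
                visitMethodInsn(Opcodes.INVOKESTATIC, "audit/Auditor", "enter",
                    "(Ljava/lang/String;)V", false);
              }
            };
          }
        }, 0);
        return cw.toByteArray();
      }
    }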

So I've been using ASM to instrument code, and generating pretty SVG plots of the call stack over time. I've learnt quite a bit about Java class loading in the process. It's also moved me one step closer to being able to generate classes out of OWL descriptions (yes, I can use UML, but that's not very serializable now, is it?). Probably the biggest trick was working out how not to audit the code doing the auditing (I hit some infinite recursion on my first attempt). However, this ended up being trivial when I realised that I'm just auditing a stack, which by definition is associated with a thread (Brian's library records method entries and exits based on the current thread). All I needed was some thread-local storage to hold a lock on the auditing process. Then if the auditing code saw that the lock was being held, it skipped the auditing and returned to the original task. The best part is that it's all local to the current thread, so there's no synchronisation required.
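A minimal sketch of that per-thread guard (the names here are mine, not Brian's library):

    public final class AuditGuard {
      // one flag per thread, so no synchronisation is needed
      private static final ThreadLocal<Boolean> IN_AUDIT = new ThreadLocal<Boolean>() {
        @Override protected Boolean initialValue() { return Boolean.FALSE; }
      };

      public static void enter(String methodName) {
        if (IN_AUDIT.get()) return;      // already auditing on this thread: skip
        IN_AUDIT.set(Boolean.TRUE);
        try {
          // ... record the method entry (the part that might itself be instrumented) ...
        } finally {
          IN_AUDIT.set(Boolean.FALSE);
        }
      }
    }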

This kind of bytecode manipulation was amazingly similar to the Windows API interception I was doing a couple of years ago. I enjoyed it then too. It was just as tough to debug, since debuggers don't expect to see bytecodes modified from the condition the compiler left them in. The classes drop all line numbers, and tracepoints only start to make sense when the instruction pointer moves to an unaffected class. I've also started running into the following exception when I use instrumentation on code with a logger in it:
java.util.logging.ErrorManager: 5
java.util.MissingResourceException: Can't find bundle for base name sun.text.resources.DateFormatZoneData, locale en_US
I then get shown a deep stack, which would seem to indicate that the method-calling modifications from ASM are all correct. So it looks like the classpath is wrong at this point. Frankly I'm not surprised, as I've seen different classloaders coming in during the startup process of an application (it's odd seeing an exception about a missing class when the code in question is being called from inside the class that supposedly can't be found!).

Still, if I avoid the logger, then all seems well.

Wednesday, March 15, 2006

Macs


I've had some new toys to play with in the last few days, so my documentation of Kowari has slowed down. I expect to be back to it in a couple of days (though not tonight, and tomorrow is Anne's birthday).

In the meantime, I've been looking at compiling things in Eclipse on the Mac.

For a start, our new Mac Mini (which runs Java code 4 times faster than my PowerBook!) can't run Eclipse. The lack of an Intel version is listed as a bug, but it hasn't been addressed yet. As I type this I have a set of scripts downloading ALL the Eclipse source code, in the hope that I can find the JNI libraries I need. So far I've found one of the 6 I need.

This was all so I could try compiling a big project from work in an Eclipse environment.

Finding Files


While setting up the files for this project from work, I discovered that a Jar file was missing from the build path. I still like to use locate to look for files by name, as Spotlight only seems to be any good when looking for files by content. However, since the file had only just come in through CVS, it wasn't going to be in the locate database.

Of course, the simple thing to do here was to run find, but it reminded me that I wanted to update the locate database. Unfortunately I couldn't remember the name of the command, as it's different to the simple updatedb found on Linux. Fortunately "locate updatedb" helped me remember that I was looking for locate.updatedb.

This led me to wonder why the update doesn't happen much on this machine. The computer is often on overnight, yet locate regularly fails to find files I know I've had for a while. I know that something has supplanted cron on the Mac, so I did a man locate.updatedb. This pointed me to the /etc/weekly script, so I decided to check who was supposed to call it and when.

/etc/weekly is a symlink to /etc/periodic/500.weekly, so I grepped through /etc and discovered:
    crontab:# The periodic and atrun jobs have moved to launchd jobs
Now this looks like the cron replacement! Running which launchd returned /sbin/launchd, which did little more than tell me it is an admin command (obviously). So I checked out the man pages.

The man pages explain that launchd is a new system to replace rc, though there are still a couple of elements going through the old rc system. launchd is configured with XML (as are most of the modern features in OSX), using files found in the following directories:
     ~/Library/LaunchAgents           Per-user agents provided by the user.
     /Library/LaunchAgents            Per-user agents provided by the administrator.
     /Library/LaunchDaemons           System-wide daemons provided by the administrator.
     /System/Library/LaunchAgents     Mac OS X per-user agents.
     /System/Library/LaunchDaemons    Mac OS X system-wide daemons.
Examples are quicker to look at than documentation, so I checked these out, and discovered that all but the final directory were empty. However, the contents there included a file called com.apple.periodic-weekly.plist, which appeared to be what I was after. Looking inside I found that it was all XML, as advertised, including the following fragment:
  <key>ProgramArguments</key>
  <array>
    <string>/usr/sbin/periodic</string>
    <string>weekly</string>
  </array>
Explanations of this and the rest of the file are found in man launchd.plist. But even without looking in the man pages I could see that it was calling a program called periodic to do the work when the time arrived.

So now I knew that launchd did the scheduling, and I could change the time if necessary by editing the XML file /System/Library/LaunchDaemons/com.apple.periodic-weekly.plist.

But what if I wanted to run the process manually (like, right now)? So I ran file /usr/sbin/periodic, and was told:
/usr/sbin/periodic: Bourne shell script text executable
A quick look inside this script showed that it takes an argument of "daily", "weekly" or "monthly", with predictable results.

So I could run periodic directly, but the man pages from earlier made me wonder if I could do this through launchctl, the "control" program for launchd.

Looking at the man pages again told me that launchctl could be run interactively, which sounded like a better way to test things out. The program seemed unresponsive until I told it to "load /System/Library/LaunchDaemons", which got things moving along nicely. I'm still not sure whether those jobs were already loaded before I loaded them myself, and I didn't want to reboot to see whether a clean machine has them.

I started a few of the processes, but 3 of them would not launch, and the system kept respawning them. I was worried that this try/die loop was going to go on indefinitely (or until I learned how to stop it), but fortunately the system worked out what was happening, and stopped them all.

In summary, it seems that launchd takes non-terminating programs that communicate over stdin/stdout and turns them into logged daemons. Running a process that finishes quickly (like periodic) doesn't seem to be what it's for. So to run periodic manually I needed to call it directly.

That was a long-winded way of making sure my locate database stays up to date, but I'm pleased I learned something about the system while I was at it. :-)

Wednesday, March 08, 2006

Documentation

Apologies to anyone not interested in reading my description of the inner workings of Kowari.

There's obviously a lot happening while I try to get used to a new city and lifestyle, so I haven't returned to my research yet. I'm also working on commercial code, which means I have to be more circumspect about what I'm doing. This all means I haven't had the time or motivation to do new things in my own time, making it hard to write about anything interesting!

As an exercise in re-familiarizing myself with Kowari, and keeping in the habit of writing (which can be hard to start again once you've stopped) I decided that I should write about how each class works. I mentioned this shortly after presenting all of this for NGC, but it's taken me this long to get around to it.

I'll write about more interesting things as they come to hand. For instance, I was told yesterday that a major impediment to the adoption of semantics in some areas has been the lack of a scalable statement store. Argh!

AbstractBlockFile

This class does not add much to the semantics of BlockFile, but it has a few internal details worth mentioning.

AbstractBlockFile contains a static block which determines the byte ordering to be used for all files. This is generally the native format for the platform, but may be overridden by setting the system property tucana.xa.useByteOrder to native (the default), big_endian, or little_endian. This should usually be left alone, but it may be necessary if files have been moved from one computer to another with a different CPU architecture. Upgrading an Apple from PowerPC to the new Intel Core Duo is a good example of the need for this option.
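A static block like that might look roughly like this (my illustration of the idea, not the actual Kowari source):

    import java.nio.ByteOrder;

    public class ByteOrderConfig {
      // The byte order used for all block files, chosen once at class load time.
      static final ByteOrder ORDER;

      static {
        String prop = System.getProperty("tucana.xa.useByteOrder", "native");
        if ("big_endian".equalsIgnoreCase(prop)) {
          ORDER = ByteOrder.BIG_ENDIAN;
        } else if ("little_endian".equalsIgnoreCase(prop)) {
          ORDER = ByteOrder.LITTLE_ENDIAN;
        } else {
          ORDER = ByteOrder.nativeOrder();   // "native" is the default
        }
      }
    }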

The constructor for AbstractBlockFile performs several checks. It starts by checking if the file is already present in a static set of files. If so, then this JVM has already opened the file, so an exception is thrown. Otherwise, the file is added to the set once it has been opened.

The second check in the constructor is to ensure that the file is a multiple of the block size. If not, then it tries to truncate the file to an appropriate length.
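A simplified sketch of those two checks (again, my own shape for illustration, not the real constructor):

    import java.io.File;
    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.nio.channels.FileChannel;
    import java.util.Collections;
    import java.util.HashSet;
    import java.util.Set;

    public abstract class BlockFileSketch {
      // every file opened by this JVM, to catch double-opens
      private static final Set<File> openFiles =
          Collections.synchronizedSet(new HashSet<File>());

      protected RandomAccessFile raf;
      protected FileChannel fc;

      protected BlockFileSketch(File file, int blockSize) throws IOException {
        if (!openFiles.add(file)) {
          throw new IllegalStateException("File already open in this JVM: " + file);
        }
        raf = new RandomAccessFile(file, "rw");
        fc = raf.getChannel();
        long size = fc.size();
        if (size % blockSize != 0) {
          // trim back to a whole number of blocks
          fc.truncate((size / blockSize) * blockSize);
        }
      }
    }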

Truncation is an awkward operation for a mapped file, as a mapping could still exist from the file being opened and subsequently closed earlier in the lifecycle of the program. If this is the case, an IOException will be thrown, and the mapping must be removed. Running the garbage collector is usually enough to clean up any unused file mappings, but on Windows it is necessary to run the collector several times for this technique to work.

The code to do this looks like:
int retries = 10;            // retry limit (declared earlier in the real method)
for (;;) {
  try {
    fc.truncate(fileSize);
    break;                   // truncation succeeded
  } catch (IOException ex) {
    if (retries-- == 0) {
      logger.error(
          "Could not truncate file: \"" + file +
          "\" to size: " + fileSize, ex
      );
      throw ex;
    }
    // encourage the JVM to release any stale file mappings, then try again
    System.gc();
    try { Thread.sleep(100); } catch (InterruptedException ie) { }
    System.runFinalization();
  }
}
This technique is duplicated in clear(), close() and delete().

Factory Methods

The openBlockFile() methods each take an IOType to determine which of the implementing classes to return an instance of. However, these methods also check the tucana.xa.forceIOType system property to see if this value should be overridden. If the property is set to mapped or explicit then that will dictate if MappedBlockFile or IOBlockFile should be used. Otherwise, the requested IOType will be used.
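The factory logic boils down to something like this (a sketch only; the real method names and types in Kowari may differ):

    // The forced type from the system property wins over the requested type.
    public static BlockFile openBlockFile(File file, int blockSize, IOType requested)
        throws IOException {
      String forced = System.getProperty("tucana.xa.forceIOType");
      IOType ioType = requested;
      if ("mapped".equalsIgnoreCase(forced)) {
        ioType = IOType.MAPPED;
      } else if ("explicit".equalsIgnoreCase(forced)) {
        ioType = IOType.EXPLICIT;
      }
      // resolution of AUTO (mapped on 64 bit systems, explicit otherwise) omitted here
      return (ioType == IOType.MAPPED)
          ? new MappedBlockFile(file, blockSize)
          : new IOBlockFile(file, blockSize);
    }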

It should be noted that it is unusual for an abstract class to have information about its implementing subclasses like this. If another type of implementation were to be added, then this super class would need to be updated as well. This sort of dependency is undesirable in object oriented code, but the benefits here were deemed to make it worthwhile.

Opened Files

The constructor for this class does not open the file, but rather calls the ensureOpen() method. This method makes sure that a file is open, and if it is not, then it attempts to perform the open operation. The operation is called in this way to try to cover occasions where a problem occurred (such as a "rollback" operation), and there is some code in the method to handle these conditions.

Several other methods in AbstractBlockFile also call ensureOpen before performing file operations, in case anything has happened to close access to the file.

Note also that access to ensureOpen() and the isOpen set is all synchronized.
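The shape of the pattern is something like this (a guess at the structure for illustration, not the actual method):

    // Every file operation first makes sure the file is open, reopening it if
    // something (such as a rollback) has closed it in the meantime.
    protected synchronized void ensureOpen() throws IOException {
      if (isOpen) return;
      raf = new RandomAccessFile(file, "rw");
      fc = raf.getChannel();
      isOpen = true;
    }

    public synchronized void force() throws IOException {
      ensureOpen();
      fc.force(true);   // block until the data has reached the disk
    }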

Tuesday, March 07, 2006

BlockFile

BlockFile is an interface for accessing files full of identically sized "blocks" of data. This interface is like IntFile, in that they both describe the abstract methods for accessing their respective files, and both have two concrete implementations: one for mapped file access, and the other for normal I/O access. In the case of BlockFile these are MappedBlockFile and IOBlockFile.

Unlike IntFile, the concrete implementations for BlockFile share a significant portion of their code. This is shared between the two classes by having them both extend an abstract class called AbstractBlockFile, which in turn implements BlockFile. So MappedBlockFile and IOBlockFile do not implement BlockFile directly.
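In skeleton form, the relationship looks like this (method lists elided):

    interface BlockFile {
      // setNrBlocks()/getNrBlocks(), block access methods, clear(), force(), close(), delete()
    }

    abstract class AbstractBlockFile implements BlockFile {
      // shared code: open-file checks, truncation with retries, ensureOpen(), factory methods
    }

    class IOBlockFile extends AbstractBlockFile {
      // explicit read()/write() access
    }

    class MappedBlockFile extends AbstractBlockFile {
      // memory-mapped access
    }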

Redundant Interfaces

This arrangement makes BlockFile superfluous, since AbstractBlockFile can be used in every context where BlockFile might have been. However, using an interface in this way is a well established technique in Kowari. It allows for the rapid development of alternative implementations as the need arises. It also permits an object that fulfils a number of roles to implement more than one interface. These advantages both help the codebase stay flexible for ongoing development.

Finally, these interfaces, redundant or not, provide a consistent way of documenting the method of using a class (its platonic "interface", as opposed to the grammatical construct in the Java language).

Records

The point of a BlockFile is to store a series of records of identical structure in a file. Each record will be exactly the same size. The data structure within that record is left up to the caller. Kowari uses BlockFiles to store numerous data types, including strings (of defined lengths), numbers, tree nodes, and linked lists.

Files which hold records like this are essential to scalable systems, as data from anywhere in the system can be found and modified rapidly. Variable length structures can be more space efficient, but they require that a file be traversed before data can be found, and even when they are indexed for rapid retrieval, modification is often impractical.
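The reason fixed-size records are so cheap to reach is that the offset of any record is just its ID multiplied by the record size. A tiny self-contained illustration (not Kowari code):

    import java.io.IOException;
    import java.io.RandomAccessFile;

    class FixedRecordFile {
      private final RandomAccessFile raf;
      private final int recordSize;

      FixedRecordFile(RandomAccessFile raf, int recordSize) {
        this.raf = raf;
        this.recordSize = recordSize;
      }

      byte[] read(long recordId) throws IOException {
        byte[] buf = new byte[recordSize];
        raf.seek(recordId * recordSize);   // jump straight to the record: no traversal needed
        raf.readFully(buf);
        return buf;
      }

      void write(long recordId, byte[] record) throws IOException {
        raf.seek(recordId * recordSize);
        raf.write(record, 0, recordSize);
      }
    }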

As a side note, the expected replacement for the Kowari file system will use variable length structures, but with inherent indexing to make traversal faster than it currently is. The problem with modifying data will still exist, but this will be managed in a linear annotation queue, which will be merged back in to the main data silently in the background. The result should demonstrate much faster read and write operations, particularly for terabyte scale files.

Methods

setNrBlocks() and getNrBlocks() are used to set and get the length of the block file. Since the file may only be viewed as a collection of blocks, its length is only ever measured in units of blocks.

Like all the other file-manipulating classes, clear() is used to remove all data from the file, and set it to zero length. force() is used to block the process until all data is written to disk. unmap() attempts to remove any existing file mappings, close() closes the file, and delete() closes the file and removes it from the disk. See IntFile for a more detailed explanation.

The remaining methods are all for manipulating the blocks within this file.

Blocks

The Block class was created to abstract away access to the records stored within a BlockFile. The block is always tied back to its location on disk, so any operations on that block will always be to the specific location on disk. This is important for memory mapped files, as it means that a write to a block will be immediately reflected on disk (though disk I/O can be delayed by the operating system) without performing any explicit write operations with the block.

Implementations of BlockFile are also expected to manage Blocks through an object pool. This was particularly important in earlier JVMs, where object allocation and garbage collection were expensive operations. Fortunately, recent versions of Java no longer suffer these effects, and an object pool can even slow a process down.

The other reason to use an object pool for blocks is to prevent the need to re-read any blocks still in memory. This is not an issue for memory mapped files, but if the blocks are accessed with read/write operations then they need to be initialized by reading from disk. The object pool for blocks will re-use a block associated with a particular file address whenever one is available. If one is not available, then the standard pool technique of re-initializing an existing object is used. With Java 1.5, it is this last operation which may be discarded.

As a consequence of the above, the remaining BlockFile methods all accept an ObjectPool as a parameter. This may appear unnecessary, since a pool would usually be associated with an instance of the BlockFile class, but in this case there are several pools which may be used, depending on the needs of the calling code.
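A toy version of the pooling idea (not the Kowari ObjectPool, just an illustration of preferring a block already bound to the requested ID, and recycling another block otherwise):

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.HashMap;
    import java.util.Map;

    class BlockPool {
      static class Block { long blockId; byte[] data; }

      private final Map<Long, Block> byId = new HashMap<Long, Block>();
      private final Deque<Block> free = new ArrayDeque<Block>();

      Block acquire(long blockId, int blockSize) {
        Block b = byId.remove(blockId);
        if (b != null) {
          free.remove(b);            // it still holds this block's data: no re-read needed
          return b;
        }
        b = free.poll();             // otherwise recycle any released block
        if (b != null) {
          byId.remove(b.blockId);    // it no longer represents its old position
        } else {
          b = new Block();           // pool empty: allocate a new one
          b.data = new byte[blockSize];
        }
        b.blockId = blockId;         // caller must (re)initialise the data from disk
        return b;
      }

      void release(Block b) {
        byId.put(b.blockId, b);
        free.add(b);
      }
    }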

Block Methods

allocateBlock() is used to get a Block for a particular file position, defined by a blockId parameter, without reading the file for any existing data. This is for new positions in the file where the data was previously unused. There is no reason to read the data at this point, and the read operation would only waste time. Writing the resulting block will put its contents into the file at the given block ID.

readBlock() is similar to allocateBlock(), but the block is already expected to have data in it. The Block object is allocated and associated with the appropriate point in the file, and data is read into it.

Since read operations are never explicit in memory mapped files, the allocateBlock() and readBlock() methods are identical in the MappedBlockFile class.

Any blocks obtained from allocateBlock() and readBlock() should be passed to releaseBlock() once they have been finished with. This allows the block to be put back into the object pool. While not strictly necessary, it should improve performance.

writeBlock() writes the contents of a Block's buffer to that Block's location on disk. This performs a write() in IOBlockFile and is an empty operation (a no-op) in MappedBlockFile.

The copyBlock() method sets a new block ID on a given Block. This now means that the same data is set for a new block. On IOBlockFile this is a simple operation, as the buffer remains unchanged. However, on MappedBlockFile the contents of the first block have to be copied into the new block's position. This is unusual, since MappedBlockFile usually does less work in most of its methods.

modifyBlock() and freeBlock() should be removed from this class, as they are operations which are designed for ManagedBlockFile. This class provides a layer of abstraction over BlockFile and will be described later. The only implementations of these methods are in AbstractBlockFile, where an UnsupportedOperationException is thrown.

Block File Types

The BlockFile interface also includes a static inner class which defines an enumeration of the types of BlockFile. This is the standard enumeration pattern used before Java 1.5.

The defined types are:
  • MAPPED: The file is memory mapped. Reading and writing are implicit, and the read/write methods perform no function.
  • EXPLICIT: The file is accessed with the standard read/write mechanism.
  • AUTO: The file is accessed by memory mapping if the operating system supports a 64 bit address space. Otherwise file access is explicit.
  • DEFAULT: This is set to AUTO, unless the value is overridden at the command line by defining tucana.xa.defaultIOType as mapped, explicit or auto.
The code for determining this setup is in the static initializer of the enumeration class.
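For anyone who hasn't seen the pre-1.5 typesafe-enum pattern, it looks roughly like this (my sketch; the Kowari class may differ in its details):

    // Nested inside the BlockFile interface, so it is implicitly public and static.
    final class IOType {
      static final IOType MAPPED = new IOType("mapped");
      static final IOType EXPLICIT = new IOType("explicit");
      static final IOType AUTO = new IOType("auto");
      static final IOType DEFAULT;

      static {
        // DEFAULT resolves to AUTO unless overridden on the command line
        String prop = System.getProperty("tucana.xa.defaultIOType", "auto");
        if ("mapped".equalsIgnoreCase(prop)) DEFAULT = MAPPED;
        else if ("explicit".equalsIgnoreCase(prop)) DEFAULT = EXPLICIT;
        else DEFAULT = AUTO;
      }

      private final String name;
      private IOType(String name) { this.name = name; }
      public String toString() { return name; }
    }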

This enumeration class is used by the factory methods in AbstractBlockFile to determine which concrete implementation should be instantiated.

Thursday, March 02, 2006

Posts

I've noticed that my new blog has slightly different options than my old one. For instance, the new blog has a "title" that I can fill in for every post, but this blog doesn't. I wonder why the difference, and if it's possible to upgrade?

Also, I've been using <h3> tags in the new blog, to match the template I'm using there. I notice that <strong> doesn't do a lot in the current stylesheet for this blog, so I decided to trial using <h3> here as well.

MappedIntFile

The natural approach to mapping a file is to simply map the entire file from beginning to end. Any accesses to this file become simple get() and put() operations, which in turn get passed into direct memory accesses.

In Java this suffers from 2 major problems. The first is the already mentioned issue of the 2GB limit, even for 64 bit operating systems.

The second problem is the inherent difficulty in remapping the file when it grows, since it is impossible to guarantee unmapping of a file. A new mapping plus the old one may have a combined total that is larger than the available address space, so it is impossible to have both in existence at once. Another issue is that remapping a file that has grown is a very expensive operation, particularly when the garbage collector will be repeatedly run in an attempt to clean up older file mappings.

A way to avoid all of these issues is to break the mapping into a series of smaller consecutive mappings:
  • Even though the 2GB limit exists for a single mapping, a 64 bit system may map up to hundreds of GB of files.
  • Expanding a file just means adding a new mapping onto the end of the file. For this reason, files get expanded in "chunks". The old mappings stay in place, don't conflict with the new mapping, and don't crowd out memory.
  • Leaving the old mappings alone also means there is no cleanup overhead, and in particular no expensive garbage collecting.
These mappings are all managed in an array of MappedByteBuffer called (funnily enough) mappedByteBuffers. Access to the same data as ints or longs is provided by a pair of associated arrays, intBuffers and longBuffers.

The use of these arrays of buffers makes everything more complex in this file, but not excessively so. The constructor has to calculate the number of buffers needed to cover the entire file, and sets up the arrays to hold them. It then passes the work on to a method called mapFile(). This method loops through the array of buffers, and for each element it sets up the mappings to the appropriate areas of the file. This method is also called when a file is expanded, in which case it copies all the existing mappings from the original array, and appends the new mappings to the end of the new array. Any failed attempts at mapping will result in the garbage collector being run, and the mapping attempted once more. This process is in a loop that will run 10 times (yes, that's a magic number that seems to work well. It should probably be configurable).

A mapped file can be expanded or contracted by calling setSize(). There are currently no use cases to shrink a file, except for cases where the file is shrunk to zero size (with clear()), so smaller sizes than the current size are ignored. Once the size is determined, mapFile() is called again to update the mapping arrays and create the new mappings.

Each of the get and put methods first needs to determine the correct mapping to use (by taking the absolute offset divided by the size of the mappings) and the offset into the mapping (by taking the absolute offset mod the size of the mappings). Does this remind any Intel "protected mode" programmers of selectors and offsets? :-)

Once the correct offset within the specified mapping is found, then the access goes ahead as normal.
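A rough self-contained sketch of the whole scheme (the region size, names, and the lack of boundary and retry handling are my simplifications, not the actual Kowari code):

    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    class ChunkedMappedFile {
      static final int REGION_SIZE = 8 * 1024 * 1024;   // size of each mapping, in bytes

      private final FileChannel fc;
      private MappedByteBuffer[] regions = new MappedByteBuffer[0];

      ChunkedMappedFile(RandomAccessFile raf) throws IOException {
        this.fc = raf.getChannel();
        mapFile(fc.size());
      }

      // Map (or extend the mapping of) the file out to at least newSize bytes.
      private void mapFile(long newSize) throws IOException {
        int needed = (int) ((newSize + REGION_SIZE - 1) / REGION_SIZE);
        if (needed <= regions.length) return;
        MappedByteBuffer[] newRegions = new MappedByteBuffer[needed];
        System.arraycopy(regions, 0, newRegions, 0, regions.length);   // keep old mappings
        for (int i = regions.length; i < needed; i++) {
          newRegions[i] = fc.map(FileChannel.MapMode.READ_WRITE,
              (long) i * REGION_SIZE, REGION_SIZE);
        }
        regions = newRegions;
      }

      // Assumes a value never straddles a region boundary (handling that case is omitted).
      int getInt(long byteOffset) {
        int region = (int) (byteOffset / REGION_SIZE);   // which mapping
        int offset = (int) (byteOffset % REGION_SIZE);   // where within that mapping
        return regions[region].getInt(offset);
      }

      void putInt(long byteOffset, int value) {
        int region = (int) (byteOffset / REGION_SIZE);
        int offset = (int) (byteOffset % REGION_SIZE);
        regions[region].putInt(offset, value);
      }
    }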

Some of this may sound like significant overhead, but it is not. The majority of the extra work is simple integer arithmetic, which will be performed in parallel to other operations on most modern processors.

The result is a MappedIntFile which can be expanded while mapping a large portion of memory, can handle more than 2GB, and is as fast as or faster than a single large mapping would have been on all operations.