Sunday, February 26, 2006

Grammar
The week was filled with lots of extracurricular stuff, mostly (though not entirely) due to a visit from the client whose project I'm working on. But by the end of the week I finally had the grammar code parsing a lot of straightforward English into RDF. I also have a few "hacks" to make it a little more RDF friendly, such as picking up prepositions in predicates.

For instance, when I first parsed "The quick brown fox jumps over the lazy dog." I was getting a subject of fox, a predicate of jumps, and an object of dog. This says something a bit different from the original intent. By picking up the preposition for the adverbial phrase, I instead get a predicate of jumps over, which is what I wanted.
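To make that concrete, the two results look something like this when written out as N3 (the namespace is purely illustrative, not what the code actually emits):

    @prefix ex: <http://example.org/> .

    # What I was getting: the preposition is lost.
    ex:fox  ex:jumps  ex:dog .

    # What I wanted: the preposition folded into the predicate.
    ex:fox  ex:jumpsOver  ex:dog .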

It's still rudimentary, but it's very cool to stick in natural English and get out sensible RDF. I'd love to open source this stuff, but I did it on the company's dime, so it's their call. Probably not, but they want to get more involved with OSS, so maybe.

On the other hand, it wouldn't hurt if the code were never open sourced. I didn't really know what I was doing when hacking the grammar code, so it could be a lot prettier (and more extensible). Releasing dirty code into the wild can just be a recipe for embarrassment. :-)

Advanced Degrees
DavidW posted his result on the What Advanced Degree Should You Get test. OK, so these tests are far from giving a definitive portrayal, but I couldn't fight the temptation to fill it in. What do you know? I got:

You Should Get a PhD in Science (like chemistry, math, or engineering)

You're both smart and innovative when it comes to ideas.
Maybe you'll find a cure for cancer - or develop the latest underground drug.

Not really surprising. Maybe it's telling me to hurry up with this OWL/RDF thing and get back to my Physics postgraduate study (I never intended to be away from it for this long).

Personal
Anne has been keeping a blog about our recent move to Chicago. I've had some of my own thoughts and opinions since getting here, so I finally decided to keep a personal blog about it.

It won't be interesting to anyone who doesn't know me, but I thought I'd mention it in case any of my family ever read these posts. :-)

(Besides, an extra link never hurt a Google ranking. Oh wait... it did).

Tuesday, February 21, 2006

Chicago PD
I was going to complain cathartically tonight, but Anne's done it for me. I also started going back to the gym tonight, so that's helped me feel a bit better, despite my lack of catharsis.

We have a chain on the door now, so hopefully we'll have no more incidents. I'm still feeling paranoid though.

Bottom Up
There is a lot to be said for describing a system from the top down. The overall structure comes into view very quickly, and there is a concentration on the general architectural concepts which are so important for understanding and modifying the code.

However, I have found that describing Kowari at a high level always seems to lead to questions about the details. There are concepts that people seem to struggle with until they see how all the details are managed. This encourages me to approach the system from the ground up. An obvious advantage of this approach is that there are no dependencies on unexplained systems. Conversely, a lot of ground has to be covered before the overall structure comes into view.

There are also a number of operations at the bottom levels which are created to support higher level concepts. While the operations are easy to follow, the need for them may be unclear until the higher level code is viewed.

Possibly the best compromise is to start at the top, explain the requisite lower-level details as needed, and return to the top whenever possible. My main problem here is that the theme of the discussion jumps around a lot, and I may end up discussing details that people didn't need while skipping those that were more interesting.

In the end I've decided to go bottom-up. At least that way I can be consistent, and I don't have to be concerned about missing details. More importantly, once I start getting higher up in the architecture, it will be possible to refer back and forth as required (that's an advantage of written explanations over spoken ones). The one piece of architecture that I feel is important to know at this level is the Phase Tree design, which was covered in my last entry.

The descriptions I'll provide for the lower level classes should be considered a supplement to the Javadoc. If anyone would like to see more info on anything, then please let me know.

I'll start in the org.kowari.util package.

IntFile
This class provides the semantics of an array of 64 bit long values, backed by a file. It has operations to set the size of the array (setSize(int)), and to set and get values from any position from 0 to the end of the array (putLong(long, long) and getLong(long)).

A less-used set of operations are the getters and setters which treat the structure as an array of 32 bit int values (putInt(long, int) and getInt(long)), or 8 bit byte values (putByte(long, byte) and getByte(long)).

This class also provides storage for an array of "unsigned" 32 bit integers. This is handy for file access, and for managing some data types. However, unsigned values are not directly supported in Java (with the exception of the char type). To manage them, they are passed around in 64 bit longs so they get handled correctly, but they are still stored on disk in a 32 bit pattern. The methods here are called putUInt(long, long) and getUInt(long).
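As a quick illustration, calling code ends up looking something like this (a sketch only: the file name is made up, and the instance comes from the factory methods described below):

    IntFile idx = IntFile.open("example.idx");  // hypothetical file; may throw IOException
    idx.setSize(1024);            // an array of 1024 longs
    idx.putLong(0, 42L);
    long v = idx.getLong(0);      // 42

    // Unsigned 32 bit values travel as longs. The largest unsigned
    // 32 bit value is 0xFFFFFFFFL, which comes back positive, not as -1.
    idx.putUInt(1, 0xFFFFFFFFL);
    long u = idx.getUInt(1);      // 4294967295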

Concrete Classes
IntFile is an abstract class, and implements minimal functionality. The real work is done in a pair of concrete classes called ExplicitIntFile and MappedIntFile.

MappedIntFile uses the New IO (NIO) functionality introduced in Java 1.4. It memory maps a file, allowing the underlying operating system to manage reading and writing of the required memory buffers. This leverages the efficient management of disk and memory that is essential in modern operating systems, and avoids the traditional copy to or from additional memory buffers.

Unfortunately, memory mapping files in this way is restricted by the size of the address space of the process. On a 32 bit system, this creates a maximum limit of 4GB, though the program and the operating system must use some of this space. 64 bit systems don't do much better, as many 64 bit operating systems impose limits within their address space for various operational reasons. In Java, the addresses for NIO are all 32 bit int values. Since this is a signed value (the sign using up one bit), the maximum size for a single file mapping is 2GB.
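This limit is visible in the standard NIO calls themselves. The following is plain Java 1.4 (nothing Kowari-specific, and the file name is made up):

    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    public class MapDemo {
      public static void main(String[] args) throws Exception {
        RandomAccessFile raf = new RandomAccessFile("data.bin", "rw");
        FileChannel channel = raf.getChannel();
        // The size parameter is declared as a long, but any value over
        // Integer.MAX_VALUE (2GB - 1) makes map() throw an exception.
        MappedByteBuffer buf =
            channel.map(FileChannel.MapMode.READ_WRITE, 0, channel.size());
        buf.force();  // flush the mapped region back to disk
        raf.close();
      }
    }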

All this means that if a Kowari instance needs more space, it has to revert to using standard read/write operations for accessing the files. This is the purpose of the ExplicitIntFile class. This class's implementation of each of the put/get methods calls down to read/write methods on the underlying java.nio.channels.FileChannel object. This implementation is not as fast as MappedIntFile, but it can operate on much larger files, particularly when address space is at a premium.

More IntFile
The constructors for each of the concrete implementations of IntFile all have default (package) access, rather than being public. Instead, the IntFile class contains factory methods which instantiate the appropriate concrete class according to requirements.

The factory methods are all called open(...), and accept either a String or a java.io.File object. They will open an existing file when one exists, or create a new file otherwise.

First of all, the system properties are checked for a value called "tucana.xa.forceIOType". If it doesn't exist, then it defaults to using MappedIntFile. Otherwise, it makes a choice based on the expected values of "mapped" or "explicit". Any other value will fall back to MappedIntFile and give a warning in the log.
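I won't reproduce the real factory here, but the selection logic amounts to something like this (a paraphrase of the behaviour just described, not the actual source):

    // Paraphrased, not the actual Kowari code.
    String ioType = System.getProperty("tucana.xa.forceIOType");
    if (ioType == null || ioType.equals("mapped")) {
      return new MappedIntFile(file);    // the default
    } else if (ioType.equals("explicit")) {
      return new ExplicitIntFile(file);
    } else {
      logger.warn("Invalid tucana.xa.forceIOType value: " + ioType);
      return new MappedIntFile(file);    // fall back to mapped
    }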

Unfortunately, we have found a bug that occasionally manifests in ExplicitIntFile. DavidM tracked it down, but it is tough to reproduce. I don't have the details, but on hearing this I audited this class, and it all appears correct. Until we have a solution to this problem, the factory method is temporarily using MappedIntFile in all cases (this has been a problem for some users, and needs fixing).

Forcing, Clearing, and Deleting
The remaining methods on IntFile are for managing the file, rather than the data in it.

force() is used to ensure that any data written in the putXXX(long, XXX) operations has been written to disk. This relies on the operating system for the guarantee of completeness. It is relevant to both mapped and explicit IO, as both are affected by write-behind caching. This operation is essential for data integrity when the system needs to know that all files have reached some fixed state.

clear() leaves the file open, but truncates it to zero length, thereby deleting the entire contents of the array.

close() simply releases the resources used for managing the file, and is primarily used during shutdown of the system. delete() also releases all the resources, but then deletes the file as well.

The final method to consider is unmap(). This method is only relevant to the MappedIntFile class, but it must be called regardless of the implementing class used. This is because calling code cannot know which implementation of IntFile is being used (this is intentionally hidden, so implementations can be swapped with ease).

When unmap() is called on MappedIntFile, all the references used for the mapping are explicitly set to null. This allows the Java garbage collector to find the mappings and remove them, thereby freeing up address space.

This is an important operation, but it is difficult to enforce. Java does not permit explicit unmapping of files, since allowing it would permit access to memory which is not allocated (a General Protection Fault on Windows, a segfault on x86 Linux, and a bus error on Sparc Linux and Mac OS X). The closest we can come to forcing this behavior is to make the mapping available to be garbage collected, and run the garbage collector several times until the mapping has been cleaned up. This actually works on most operating systems, but needs to be iterated much more on Windows before it works.
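The workaround looks something like this (a sketch of the general technique, with illustrative names, rather than Kowari's exact code):

    // Drop the only strong reference to the mapping, then coax the
    // collector into finalizing it, which releases the address space.
    mappedBuffer = null;
    int retries = 10;  // Windows needs many more iterations than this
    for (int i = 0; i < retries; i++) {
      System.gc();
      System.runFinalization();
      try {
        Thread.sleep(100);  // give the VM a moment to clean up
      } catch (InterruptedException e) { /* ignore */ }
    }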

Late Night
There are some specific details in MappedIntFile that need to be addressed, but I'll have to leave here and get some sleep. Hopefully I won't be woken by the Police tonight...

Thursday, February 16, 2006

Expenses
No blogging last night, as I was working on expense reports. Somehow I doubt that work will appreciate that I had to stay up until 1am, but they'd certainly notice if I hadn't. Even so, I had to spend more time on them this afternoon.

As if that weren't enough, I've also filled in all the paperwork required to become a full-time employee. It's a little scary when you have no idea of the tax system in this country. It also doesn't help that I don't have an SSN.

I applied for an SSN last week (my first day in town), but they told me that USCIS (formerly the INS) have not yet entered our names into the system. Apparently that takes 10 working days. Getting an SSN after that will take another 10 working days. So it will take me a month to get the one piece of ID needed to do practically ANYTHING in this country.

It seems a little weird that it's this hard, given that every person who is given a work visa here will require an SSN. So why not process it all together and hand out the SSN with the visa? Not to mention that some official documents require a Resident Alien number, but I have to apply for that separately as well! I'm not sure where to get it either. Possibly USCIS.

In the meantime, Anne can't get an SSN because she's not entitled to work. This is despite the fact that her visa does allow her to work! It turns out that she has to go back to USCIS and apply for permission to work. Once she has that she can apply for her own SSN.

I'd hate to know what will happen to us if we have to talk to a doctor. We're insured, but won't they ask for an SSN?

Bus Error
It turns out that I was a little sloppy with some recent JNI code. If my Dictionary class failed to load in C, then it returned a null, as documented. However, I blithely stored this pointer and went on to use it.

Until now the Dictionary class always loaded, because I always made sure the dictionary files were available. But yesterday I started running a new set of tests in a separate directory, and they immediately reported: bus error.

I forgot how undescriptive a fault in C can be. :-)

It shouldn't have taken long to find this, but I used Eclipse to trace where the fault was occurring. The GUI is slower on a Mac anyway, but for some reason it brought my whole system to its knees. The trackpad was so jerky and intermittent that I started to worry that it would need fixing. It was only after tolerating this for an hour that I realized that the problem was not the trackpad, but that the machine was struggling too much. Shutting down Eclipse fixed the problem.

I'd be tempted to continue working from the command line, along with VIM, but Eclipse can't stand me fixing files outside of its domain. Why should it care?

At least I discovered the problem easily enough, and fixed it by bringing the dictionary files into the test area. I've also gone into the code, and I now throw exceptions when a C library fails to initialize. I'm disappointed I hadn't done this in the first place, but at least I'll remember now (once bitten, twice shy).
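For anyone wondering what that looks like on the Java side, it just means treating a zero handle from the native code as fatal (the signatures here are illustrative, not my actual code):

    import java.io.IOException;

    public class Dictionary {
      private long handle;  // the C pointer, carried around as a long

      private native long init(String dictFile);  // returns 0 (NULL) on failure

      public Dictionary(String dictFile) throws IOException {
        handle = init(dictFile);
        if (handle == 0) {
          // Fail loudly now, instead of a bus error much later.
          throw new IOException("Unable to load dictionary: " + dictFile);
        }
      }
    }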

BTW, has anyone else noticed that Intel (and x86 compatible) based systems report a segfault when dereferencing 0, while Apple and Sparc systems report a bus error? I always think of null dereferences as being segfaults, so I used to get confused when I saw bus errors like this. I wonder why there's a difference.

Tuesday, February 14, 2006

Oracle
After being aware of it for a long time, today I finally had a brief look at Oracle's RDF support in 10g. I was really hoping that Oracle would bring their experience in developing database structures to the RDF domain, creating something very fast and scalable.

Unfortunately, it appears that 10g implements RDF as a schema in a set of standard relational tables, and wraps access to the system within Sesame interfaces, including SAIL. There's nothing wrong with building a Sesame system with an Oracle backend rather than MySQL, but this isn't the Oracle system that I was hoping for. It doesn't bring in the extra efficiencies needed to make RDF really move. After all, RDF has a strict shape to the data, while an RDBMS needs to handle data of all kinds of shapes. This is why Kowari has such fast load times (when configured correctly).

Interestingly, the paper which describes the details of the system references Kowari. I was surprised at this when I read marketing phrases like:
Oracle Spatial 10g release 2 introduces the industry's first open, scalable, secure and reliable data management platform for RDF-based applications.

Kowari was open, scalable, and reliable (to the best of my knowledge), and TKS was secure (one of the reasons for buying the commercial system over the Open Source Kowari).

Another quote says:
A key feature of RDF storage in Oracle is that nodes are stored only once - regardless of the number of times they participate in triples.

The wording here suggests that Oracle is unusual in this regard, but almost all the RDF systems I am aware of share this feature, including Kowari. Perhaps the multiple indexing in Kowari caused some confusion here?

It appears that RDF support by Oracle has been implemented at the highest layers by RDBMS programmers. While it undoubtedly works, I'm disappointed that they haven't implemented RDF at a lower level. Still, it would be interesting to see how many triples a second it can load.

Free Lists
Now that I've discussed the principle behind phases, the next step is to discuss the classes which support this process. At the lowest level this is the class called FreeList. This class manages resources which are allocated and/or released in various phases. It enables new resources to be created as needed, and freed resources to be re-used efficiently.

The kinds of resources managed by the FreeList class are all the fixed-length records within data files, and also the numeric identifiers used for RDF nodes. The name of the class is an historical holdover from when it was simply used to hold a list of items which had been allocated and then freed. It does a lot more now.

FreeList sits over the top of two other classes called BlockFile and IntFile, both of which are relatively easy to describe. Unfortunately, FreeList itself will take me some time to write about, and a late night is not the time to start. So I'll get into it in my next entry.

Monday, February 13, 2006

Files
With work and the move keeping me busy it's been a while since I last looked inside Kowari. To kind of ease myself back into it, I thought I should finally get around to writing some notes on the Kowari file system.

Phases
At all levels of abstraction inside the Kowari file system, the concept of phases keeps appearing. Phases are a way to keep track of a consistent state in the system, regardless of any write activity that may be occurring. They are also used to serialize write access from different connections. While this does enforce consistency in the data on disk, the inability to allow multiple parallel writers is a current weakness in Kowari. (This is the principal reason the XA2 file system has been proposed.)

Phases were borrowed from the TUX2 filesystem. In this approach, all data is laid out in a tree, with pointers going down from parent nodes to child nodes. This is a common pattern for data storage. (When I get OmniGraffle working again I'll put in a simple diagram of a tree here).

When a change needs to happen to a node in the tree, then that node is first copied to one side. The new copy contains pointers to the same children as the original node. Then each ancestor of this node is copied recursively, all the way up to the root of the tree. Each new parent node will refer to the copied child, but all the other children will be the same as the original node. The result is a pair of roots to two trees which are indistinguishable from each other, and which share many of the same nodes. (This is where I really need the diagram).

At this point, the new nodes can be modified without affecting the old tree. If modifications are required on any nodes in the old tree, these nodes can also be copied, along with all the parents up to an already copied parent. As soon as a node is found that has already been copied, then the existing copy can be adjusted to point to the new child node.

Once the write operation to the tree is complete, the old root will refer to the tree before the write, while the new root will refer to the tree after the write. The old tree was never touched, so it is guaranteed to be completely consistent. Once the new tree has been completely written, and is in a consistent state, then both trees will be completely consistent. That means that it is safe to use either root of the tree, without fear of power blackouts, OS crashes, etc.
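For anyone who'd rather see it in code, here is a toy, in-memory version of the copy-on-write step (the real thing stores nodes in files and refers to them by block addresses, but the sharing works the same way):

    // A toy binary search tree using path copying. update() returns the
    // root of a NEW tree; the old root is untouched and remains valid.
    class Node {
      final int key;
      final long value;
      final Node left, right;
      Node(int key, long value, Node left, Node right) {
        this.key = key; this.value = value;
        this.left = left; this.right = right;
      }
    }

    Node update(Node node, int key, long value) {
      if (node == null) return new Node(key, value, null, null);
      if (key < node.key) {         // copy this node, share the right child
        return new Node(node.key, node.value,
                        update(node.left, key, value), node.right);
      } else if (key > node.key) {  // copy this node, share the left child
        return new Node(node.key, node.value,
                        node.left, update(node.right, key, value));
      }
      return new Node(key, value, node.left, node.right);
    }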

At this point, a single number which holds the address for the current root of the tree can be updated. This can be performed atomically on any hard drive. Even if the power fails, every hard drive available today has enough capacitance to complete the write operation of a single block.

To guarantee an atomic write like this (ie. to guarantee a consistent tree on disk) the host operating system has to allow a force operation to disk. This is supported in Java, which in turn relies on the file system API of the underlying operating system. It is concerning that some operating systems have been known to postpone "force" operations (eg. Mac OSX), but this is a matter that is out of the control of user level code. Fortunately, the conditions required for this to cause a problem are far less likely to occur than the likelihood of hardware failure, so the risk is manageable.

Note also that the new root of the tree can be abandoned at any time with no consequences for the original tree. All that is required then is the ability to clean up or recover all the copied nodes created for the abandoned tree.

A new root to a tree as described here is called a new phase. Selecting a phase to be the new primary root of the tree is known as committing the phase. Committed phases are always in a consistent state, removing the need to journal modifications.

Another feature of phases is that at any point in time a phase may be "kept" for reading. Any future modifications will start up a new phase (creating a new root to the tree). This means that it is possible to keep multiple phases active at any time, with each phase representing a "snapshot" of the data in the tree at the time that phase was kept. This is important for multiple readers to be able to get the latest available data while a long write operation is continuing. To ensure that the snapshots are kept consistent, they must not be modified. This means that only the most recent phase may be written to.

This explains Kowari's inability to allow multiple writers. However, this strategy ensures that Kowari has a completely robust filesystem, one that can efficiently withstand any error, including the sudden removal of power. While speed and robustness are important, enterprise systems also require multiple concurrent writers. This has led to the plans for XA2.

XA2 will allow for multiple writers, but this in turn will require a form of journaling. Journaling is less efficient than phase trees, but we expect to gain in other areas of efficiency, making the overall performance even better. The system is more complex than the phase trees discussed here, so I will leave discussion of this for another time.

Thursday, February 09, 2006

Landed
I resisted the title "Sweet Home Chicago", as I'm sure everyone who has ever moved here has said that. :-)

We got through the flight successfully, though we didn't get much sleep. It's harder to sleep when looking after children who aren't sleeping much either. Luc was amazingly well behaved. It was only in the last couple of hours that he started getting hyperactive (maniacal laughter, treating everything as a game, getting into everything, etc). Fortunately there were no tears or tantrums. Nic just slept.

Membership of the Qantas Club was wonderful at LAX. We got internet access (so I could confirm the flight time with JohnN at work), but more importantly, they had a room specifically for children. So we got a little private space, and Luc got to play with Tonka toys while watching Nickelodeon. Compared to sitting on a crowded concourse it was a little slice of heaven.

JohnN picked us up at O'Hare airport, and drove us into town. We had a lot of bags, so I'm glad we didn't have to arrange our own transport. I was also glad to have someone else driving, as it started snowing when we arrived. I've been in snow, but almost never seen it falling, so it was nice to see (though I'm sure the locals didn't enjoy it).

The apartment is minimally set up, which has been enough for the first day, but we need to get out and buy things like shelves if we want to stop living out of suitcases. We also need to buy some good cold-weather clothing for the boys, as they don't have enough. Frustratingly, most places have sold out, as they are bringing in the new season in preparation for Spring.

So yesterday and today are being spent shopping for essentials. I'm just grateful I don't have to deal with work, as I'm still suffering pretty badly from jet lag.

Wireless
The apartment came with cable access for the internet, but the company hadn't got around to bringing over a wireless router. Fortunately, I had my AirPort Express with me, plus an extra US power adapter, so I plugged it in expecting it all to work.

While connecting the modem directly to my notebook worked fine, I couldn't make it work with the AirPort. The notebooks could connect to the AirPort just fine, but all routing stopped at that point, and there was no DNS.

I tried changing every setting I could think of, including setting the IP details statically to the details obtained when I plugged the notebook in directly. But the AirPort kept acting as if the ethernet was not there at all. I finally reset everything to what I thought it should have been (the settings I'd originally used), and then disconnected power to both the AirPort and the cable modem. 30 seconds later everything worked.

This isn't the first time I've seen certain devices fail connecting to a network, but not others. I wonder who was at fault in the protocol here... the AirPort or the cable modem?

Monday, February 06, 2006

Stress and Moving
This is probably my last entry from Australia for the time being. We've just spent the last few days in a mad scramble trying to finalize the things we thought were already done, or easier to complete than they really were. I hope we're near the end of it.

Tomorrow morning will see the moving company arrive, and take almost everything we own. We won't see any of it again for at least 8 weeks. We were told 8-14 weeks, so I'm expecting that it will really be around 16 weeks. Then off to a hotel, and onto a flight the next morning. It's a 12:10pm flight to LA... meaning that we have to check in no later than 9:10am! Sounds excessively early to me, but I suppose they need that to deal with the conditions set out at LAX.

In the meantime, we've been running all over town managing accounts, banks, and insurances. I've also had to spend a significant sum on travel insurance to cover us until my medical insurance becomes valid in April. This wasn't easy, as most providers won't handle people taking one-way trips. It had to be travel insurance, as we needed to be covered in case of problems on the trip, particularly if something happened to my notebook computer. Unfortunately, the only "one-way" insurance I found does not cover computers (only cameras), so I had to spend another couple of hundred to cover the computer as well. It all adds up, and I don't expect to see my credit card reimbursed soon. I'm going to be in a lot of debt for a while on this one.

Luc seems to be dealing with everything very well, and is still happy most of the time. I'm very grateful for this. He still needs attention though, so it can get difficult on occasion. Nic is as happy as ever. Anne and I are both very tired, and I'm looking forward to it all being over.

We've been told that we'll have internet access when we get there, so here's hoping I can get online quickly once we do.

Friday, February 03, 2006

Drowning
The visas finally arrived, and now I can do all the last minute things I have to get done before leaving Australia for good. The power will go off at the appointed time, as will gas, phone, internet, etc. Annoyingly, I can't cancel my mobile phone until I want it cut off, so I'll have to wait until the appointed time. I'll do that from overseas, as I'd like a bit of overlap so I can continue to make calls.

More frustrating is my health insurance. The Australian government gives a small subsidy to private health insurance but this is reduced if you spend any time after turning 30 without private cover. Because of this, I can only cancel my account without future penalties if I show my travel itinerary at an office. I have the itinerary now, but I'd rather not have to travel in.

My main problem at the moment is that I'm swamped with paperwork. Flights (obviously), visas (done, thank goodness), mail redirection (sorted), insurance (travel, landlords, shipping), mortgage (refinanced because the CBA would be a nightmare to deal with from the USA - they're bad enough in Australia), shipping forms, customs declarations, car sale, and rental documents. In the midst of this, our printer has decided to start mangling paper, so all those PDF documents I've been sent to fill in aren't helping.

Most of it takes a little time, but my "Supplemental Declaration for Unaccompanied Personal and Household Effects" form has one sticking point. It requires my "Resident Alien No.", only I know nothing about this. My visa isn't forthcoming on the subject, and I have no other documents.

The visa has a 14 digit "Control Number" on it, but that seems to be a different number altogether. I found a document on the web (an application for info form at Sandia) which asks for both the Visa Control Number and the Resident Alien No. There is also an 8 digit number in red on the visa, but it's completely unlabelled, so I don't know what it might be. Google was surprisingly unhelpful in this regard. The best I've been able to do is to find the same form provided by other shipping companies, with all of them saying that the questions should be "self-explanatory". Great.

While writing some of this, my brother has come online and has started explaining some of it. I knew that I have to apply for an SSN once I get there (why not make it a part of the visa process?), but I didn't know that I also need to get a "permanent residence card". He also told me that he didn't get his "Resident Alien Number" until he'd been living there for a few months.

I'm starting to get confused about how I can actually move to this country. Is it possible?

Work
Unsurprisingly, work has slowed right down now, though I'm still trying to keep my hand in.

Someone had a problem the other day with some decision trees. They weren't scaling all that well (they scale by log(n), obviously), and he was trying to pick up some extra scalability from somewhere. He thought of breaking the tree into several subtrees, which might make the structure more manageable, but it still requires the same number of decisions.

I started thinking about the first decision, which determines the sub-tree to go to. It occurred to me that several layers of decisions could be merged into a single "hashcode", which then finds the tree through a hash map. This tries to trade time for memory (less time for decisions, but more memory in the hash table). It's possible to go this route, but it requires careful merging of the data into the hashcode. If each of the elements going into the code require a test of some sort, then the number of tests to find the final data will not change. It's one of those things where you really need to see the shape of the data involved to see whether or not a particular optimization will work.
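A sketch of what I mean (all the names here are hypothetical): pack the first few decisions into a single key, and use that key to jump straight to the right subtree:

    // Hypothetical sketch: subtrees maps an Integer key to a DecisionTree.
    java.util.Map subtrees = buildSubtrees();   // built up front

    // Pack the first three boolean tests into one int...
    int key = (testA(data) ? 4 : 0)
            | (testB(data) ? 2 : 0)
            | (testC(data) ? 1 : 0);

    // ...then jump straight to the right subtree in one lookup.
    DecisionTree subtree = (DecisionTree) subtrees.get(new Integer(key));
    Result result = subtree.decide(data);

    // Caveat from above: testA/testB/testC still ran, so this only wins
    // if those bits fall out of the data more cheaply than comparisons.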

Merging data into a single "hash" is kind of like representing each of the elements of the data as separate dimensions. The hash code then leads to a point in N-dimensional (N-D) space representing the data you have so far. Using the hashcode to find the decision tree to use, means that these trees represent the work to be done for various regions of N-D space. This then brought me back to neural networks, as this is similar to how they are modeled.

This made me realize that in some ways a hash table can act like a neural network. The main difference here is that neural networks are forgiving for unexpected data. Hashtables can only be made to work this way if they can cover all allowable points in space. If the space isn't sparse, then that means that the hashtable might as well be an array - just so long as there is a way to map co-ordinates to something.

Anyway, it has me thinking that I might have to pull out some neural network algorithms, dust them off, and have a go at applying them to some of our problems in document categorization. I haven't used them in years, so it will be a real blast of retro for me.

RDF
In the meantime, it's about time that I look at RDF again. I still plan to use Kowari (since it's under the MPL, and that can't be changed), but while NGC are making things awkward I've been considering other routes. I'm tempted to use Sesame for a while, to learn more about it if nothing else. However, I'm not sure that I can make my thesis work in this framework, so I still need to look at other options.

I've been thinking of having a go at DavidM's skiplist code, to see if I can use it as the core to a new storage layer. If that works out, then I can start building some of the other layers on top, avoiding the problems that have had to stay in Kowari for historical reasons. Many of the top layers in Kowari were developed (as open source software) in the last year, so these can be transplanted easily.

I don't pretend that this would turn into a fully fledged RDF store (at least, not without a lot of time and help), but it could be a useful exercise. It would be enough for the OWL inferencing I want to do, anyway. It might also go a bit quicker than Kowari, since I've already done a lot of this stuff once before! :-)

I'd rather work with Kowari (for the time being), but at least this way I don't have to worry about NGC interfering. It also starts introducing the skip list code that Kowari has needed for some time.

I'll see if I get any time for this in the coming week. It may be hard. I can't even do much on the plane, as we will be watching two young children. Still, it doesn't hurt to keep thinking these things over.