<h2>Tweaking the Power Limit (2015-02-17)</h2>
After posting the power-limit function earlier I found myself thinking about the inelegance of the expression. I like the approach, but the code seems much more verbose than should be required. In particular, the <span style="font-family: Courier New, Courier, monospace;">(partial apply not=)</span> expression seemed too long for something so simple.<br />
<br />
As a reminder, here is my last iteration on the function:<br />
<blockquote class="tr_bq">
<span class="s1"><span style="font-family: Courier New, Courier, monospace;">(defn power-limit [f a]<br /> (let [s (iterate f a)]<br /> (ffirst<br /> (drop-while (partial apply not=)<br /> (map vector s (rest s))))))</span></span></blockquote>
The first thing that occurred to me here is the use of <span style="font-family: Courier New, Courier, monospace;">vector</span> to pair up the values. It's this pairing that makes the predicate for <span style="font-family: Courier New, Courier, monospace;">drop-while</span> more complex. Perhaps I should avoid the pair?<br />
<br />
The reason for this pair is so that the two values can be compared for equality in the next step. So why not just compare them straight away? The trouble is that a comparison leaves me with a true/false value, when what I actually need is the value itself once the two are equal. What I really needed was a nil-or-value result, and this reminded me of how a set lookup works. For instance <span style="font-family: Courier New, Courier, monospace;">(#{2} 1)</span> returns nil, while <span style="font-family: Courier New, Courier, monospace;">(#{2} 2)</span> returns 2. This is what I need, so why not try it?<br />
<blockquote class="tr_bq">
<span class="s1"><span style="font-family: Courier New, Courier, monospace;">(defn power-limit [f a]<br /> (let [s (iterate f a)]<br /> (first<br /> (drop-while not<br /> (map #(#{%1} %2) s (rest s))))))</span></span></blockquote>
The anonymous function in the map looks a little like line noise, but it's not too bad.<br />
<br />
Now that the result of <span style="font-family: Courier New, Courier, monospace;">map</span> is a value and not a pair, we only need <span style="font-family: Courier New, Courier, monospace;">first</span> and not <span style="font-family: Courier New, Courier, monospace;">ffirst</span>. The <span style="font-family: Courier New, Courier, monospace;">drop-while</span> predicate also gets simpler, just needing a <span style="font-family: Courier New, Courier, monospace;">not</span>, but it's still verbose. However, the <span style="font-family: Courier New, Courier, monospace;">drop-while</span> is just removing the leading nils from the results of the <span style="font-family: Courier New, Courier, monospace;">map</span>, so let's move to <span style="font-family: Courier New, Courier, monospace;">keep</span> instead:<br />
<blockquote class="tr_bq">
<span class="s1"><span style="font-family: Courier New, Courier, monospace;">(defn power-limit [f a]<br /> (let [s (iterate f a)]<br /> (first<br /> (keep #(#{%1} %2) s (rest s)))))</span></span></blockquote>
So does this work?<br />
<span style="font-family: Courier New, Courier, monospace;">=> (power-limit has-fixpoint 0)</span><br />
<span style="font-family: Courier New, Courier, monospace;">ArityException Wrong number of args (3) passed to: core/keep clojure.lang.AFn.throwArity (AFn.java:429)
</span><br />
<br />
Doh! It turns out that <span style="font-family: Courier New, Courier, monospace;">keep</span> only accepts a single collection, and does not take multiple collections the way <span style="font-family: Courier New, Courier, monospace;">map</span> does. So I'm stuck wrapping the map operation in a <span style="font-family: Courier New, Courier, monospace;">(drop-while not ...)</span>, a <span style="font-family: Courier New, Courier, monospace;">(filter identity ...)</span>, or a <span style="font-family: Courier New, Courier, monospace;">(remove nil? ...)</span>.<br />
<br />
The fact that I can remove all the leading nils, or all the nils (since all the nils are at the head here), made me take a step back again to think about what I'm doing. Once all the nils are out of the picture, I'm just looking for the first item in the resulting list. That's when I remembered <span style="font-family: Courier New, Courier, monospace;">some</span>, which tests the items in a list against its predicate and returns the first non-falsey value that the predicate produces (or nil if there isn't one). So I still need to wrap the <span style="font-family: Courier New, Courier, monospace;">map</span> operation, but I can drop the call to <span style="font-family: Courier New, Courier, monospace;">first</span>.<br />
<blockquote class="tr_bq">
<span class="s1"><span style="font-family: Courier New, Courier, monospace;">(defn power-limit [f a]<br /> (let [s (iterate f a)]<br /> (some identity (map #(#{%1} %2) s (rest s)))))</span></span></blockquote>
Not as pretty as the very first implementation (which used loop/recur), but it's no longer so verbose, so I'm feeling happier about it.
<h2>Life in Clojure (2015-02-17)</h2>
A few months ago I was looking at the <a href="https://www.youtube.com/watch?v=a9xAKttWgP4">Life in APL</a> video on YouTube and it occurred to me that other than the use of mathematical symbols for functions, the approach was entirely functional. This made me wonder if it would be easier (and perhaps clearer) to do it in Clojure. My goal was to show that Clojure was just as capable as APL, and hopefully easier.<br />
<br />
I created <a href="https://github.com/quoll/life">Life in Clojure</a> to do this, and it worked out pretty well. What I need to do now is to record a REPL session for YouTube, just like the original. However, there are a couple of things that were shown in the APL video that are missing in Clojure, and I needed to fill in the gaps. It's these gaps that I've been thinking about ever since, wondering if I could/should do better.<br />
<br />
Since I don't have a video, Life in Clojure is implemented in a <a href="https://github.com/quoll/life/blob/master/src/life/demo.clj">namespace that contains a REPL session</a>. Just use "lein repl" and paste in that namespace file and it'll do everything in the same order as the APL video, with a couple of deviations. I've commented each operation with the appropriate dialog from the original APL video, so it should be easy to see the parallels. However, for anyone who wants to see it running, the final program is implemented in the <a href="https://github.com/quoll/life/blob/master/src/life/core.clj">life.core</a> namespace.<br />
<br />
<h2>
Setup</h2>
The first change between these Clojure and APL implementations is that Clojure needs to explicitly import things into the environment, while the APL video seemed to demonstrate all operations being available by default. Given that most of these were matrix operations, and APL specializes in matrix operations, it isn't surprising that they are built in. Fortunately, all the applicable operations are available to Clojure via <a href="https://github.com/mikera/core.matrix">core.matrix</a>, so this was a simple <span style="font-family: Courier New, Courier, monospace;">require</span>, along with a project dependency. I suppose I will need to show these at the start of any video I make (if I ever make it).<br />
<br />
<h2>
Output</h2>
The APL video showed the use of a display function that prints a matrix to the console. Clojure's pretty-print will also show an m*n matrix reasonably well, and I could have chosen to use that. However, the APL code is also printing a vector, and then a matrix, where each element is an m*n matrix, and pretty-print just messes that up.<br />
<br />
The APL demo seemed to be using a library function to handle this printing, so it didn't feel like cheating to write <a href="https://github.com/quoll/life/blob/master/src/life/display.clj">my own</a>. It's a bit clunky, as it's specialized to only handle 2, 3 or 4 dimensions, but it works.<br />
<br />
The bigger issue was showing the current state of the Life playing field. APL was using a <span style="font-family: Courier New, Courier, monospace;">draw</span> function to update the contents of a text file, and opened a window that dynamically showed the contents of that file as it was updated. Using a file proxy like this made the display easy in APL, but it's a hack that I did not think was worth duplicating. Instead, I created a function to convert the playing field into an image, and sent the image to the display.<br />
<br />
Creating an image from the matrix was an easy Swing-based function (just draw filled-in rectangles based on the contents of each matrix cell, and return the image). However, rendering it was a little harder.<br />
<br />
Following the APL approach, I wanted a function that would "send" the image to the screen, and forget about it. However, Swing requires an event loop, and several listener objects to allow the window to interact with the environment. These sorts of objects do not fit cleanly into a purely functional program, but using proxies and closures does a reasonably good job.<br />
<br />
I was able to hide the default window parameters (width/height, etc), but had trouble with the fact that the APL program simply sends the output to the rendered file without defining it as a parameter. The main problem is that the <span style="font-family: Courier New, Courier, monospace;">open-window</span> function should return the created resource (the window) and this value should be captured and "written to" each time the life algorithm needs to re-render the playing field. I could do that, but introducing this resource handling into the main algorithm would get in the way of the "clarity" that the demo was trying to express. In the end, I opted to create a global atom (called <span style="font-family: Courier New, Courier, monospace;">canvas</span>) that gets updated with each call to <span style="font-family: Courier New, Courier, monospace;">draw</span>. I would not usually write my code this way, but it seemed to be the best way to duplicate the APL demo.<br />
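As a rough sketch of that pattern (not the code in the repository; the names here are invented for illustration), opening the window stashes the Swing component in the atom, and <span style="font-family: Courier New, Courier, monospace;">draw</span> just writes to whatever is currently there:<br />
<blockquote class="tr_bq">
<span style="font-family: Courier New, Courier, monospace;">(def canvas (atom nil))<br /><br />;; create a window holding a single label, and remember the label<br />(defn open-window [width height]<br /> (let [label (javax.swing.JLabel.)]<br />  (doto (javax.swing.JFrame. "Life")<br />   (.add label)<br />   (.setSize width height)<br />   (.setVisible true))<br />  (reset! canvas label)))<br /><br />;; render an image into whatever component the atom currently holds<br />(defn draw [image]<br /> (doto @canvas<br />  (.setIcon (javax.swing.ImageIcon. image))<br />  (.repaint)))</span></blockquote>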
<br />
Of course, not a lot of programs use Swing these days. The majority of programs I see with GUIs use a browser for the UI. So when a couple of my friends saw this app, they suggested reimplementing it in ClojureScript, and rendering in a web page. Funnily enough, the DOM ends up being an equivalent to the global canvas that I have in the Swing application, so the calling code would not need to change its approach. However, <a href="https://github.com/mikera/core.matrix/issues/81">core.matrix does not yet support ClojureScript</a>, so I haven't tried this yet.<br />
<br />
<h2>
Matrix Math/Logic Operations</h2>
One minor inconvenience in using the APL approach is that it uses the C-style idiom that the number 0 is the value for "false" and anything else is "true". The APL demo algorithm uses 1 to indicate that a cell is occupied, and then uses addition between cells to determine the values for subsequent generations. This creates an elegant shortcut for determining occupation for subsequent generations in Life, but it isn't as elegant in Clojure.<br />
<br />
Of course, Clojure's truthiness treats all numbers as "true", and only nil or boolean "false" count as falsey. Without the zero=false equivalence, the algorithm gets more awkward. Because of that, I created a function that takes a boolean function and returns a new function that gives the same result, mapping false->0 and true->1. This allows matrix addition to be used as the basis of the AND and OR operations, just like in the APL demo. It works fine, but I'm uncomfortable that it currently only handles the 2-argument case for the AND and OR matrix operations. Also, I've since learned a little more about core.matrix, and I think there may be better ways to define these functions. I'll have to look into that a little more.<br />
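As a rough sketch (this is not the code from the repository, just the shape of the idea), such a wrapper might look like:<br />
<blockquote class="tr_bq">
<span style="font-family: Courier New, Courier, monospace;">;; wrap a two-argument boolean function so it returns 1/0<br />(defn bool->num [f]<br /> (fn [a b] (if (f a b) 1 0)))<br /><br />=> ((bool->num >) 2 1)<br />1<br />=> ((bool->num >) 1 2)<br />0</span></blockquote>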
<br />
There is also a convenience function I created which is used to map a matrix to boolean 1/0 based on each cell being equal to a given value. It's a <a href="https://github.com/quoll/life/blob/master/src/life/matrix.clj#L62">simple one-liner</a> that makes it easier to compare to the equivalent APL code but it always felt a little inelegant. However, this might just be due to the need to map booleans to numbers.<br />
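It might look something like the following guess, using core.matrix's <span style="font-family: Courier New, Courier, monospace;">emap</span> (the linked source is the real version):<br />
<blockquote class="tr_bq">
<span style="font-family: Courier New, Courier, monospace;">;; 1 where a cell equals v, 0 everywhere else<br />(defn eq-matrix [v m]<br /> (emap #(if (= % v) 1 0) m))</span></blockquote>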
<br />
The other APL function that wasn't directly available to me was the "take" of a matrix. This operation allows a 2-D "slice" from a larger matrix, but also deals with the insertion of a small matrix into a larger "empty" matrix. After some digging into core.matrix I discovered that <span style="font-family: Courier New, Courier, monospace;">shape</span> provided similar functionality, though it needs help from <span style="font-family: Courier New, Courier, monospace;">compute-matrix</span> to handle expansions. I wrapped this in the function <span style="font-family: Courier New, Courier, monospace;"><a href="https://github.com/quoll/life/blob/master/src/life/matrix.clj#L11">takeof</a></span> to duplicate how the APL demo used it.<br />
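Here is a sketch of how those two can combine (the repository's <span style="font-family: Courier New, Courier, monospace;">takeof</span> is the authoritative version; this guess just reads from the source matrix where it can and pads with zeros where it can't):<br />
<blockquote class="tr_bq">
<span style="font-family: Courier New, Courier, monospace;">(defn takeof-sketch [rows cols m]<br /> (let [[r c] (shape m)]<br />  (compute-matrix [rows cols]<br />   (fn [i j]<br />    (if (and (< i r) (< j c))<br />     (mget m i j)<br />     0)))))</span></blockquote>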
<br />
<h2>
Powers and Power Limits</h2>
Up to this point, everything had either been a translation of library functionality from APL (the display and the graphics output) or a simple wrapper to smooth out matrix access and handle APL's integer/boolean equivalence. But right at the end of the video, the APL demonstration uses a couple of functions that had no equivalents in Clojure: the power function and the power-limit function.<br />
<br />
The <span style="font-family: Courier New, Courier, monospace;">power</span> function takes a single-argument function and applies it to an argument, takes the result and applies the function to that, and keeps repeating the application as many times as required. Specifically, <span style="font-family: Courier New, Courier, monospace;">power</span> returns a function that applies the argument function iteratively like this for the defined number of times.<br />
<br />
As an example, the 5th power of <span style="font-family: Courier New, Courier, monospace;">clojure.core/inc</span> applied to 0 is:<br />
<blockquote class="tr_bq">
<span style="font-family: Courier New, Courier, monospace;">=> (inc (inc (inc (inc (inc 0)))))<br />5</span></blockquote>
So <span style="font-family: Courier New, Courier, monospace;">(power inc 5)</span> returns a function that applies inc 5 times:<br />
<blockquote class="tr_bq">
<span style="font-family: Courier New, Courier, monospace;">=> ((power inc 5) 3)<br />8</span></blockquote>
The <span style="font-family: Courier New, Courier, monospace;">power-limit</span> function is similar, in that it iteratively applies the provided function; however, it does not stop iterating until it reaches a fixpoint value. That is, until the output of the function is equal to the input.<br />
<br />
An example function that has a fixpoint is:<br />
<blockquote class="tr_bq">
<span style="font-family: Courier New, Courier, monospace;">(defn has-fixpoint [x] (min (inc x) 3))</span></blockquote>
This returns either the increment or 3, whichever is smaller. If this is applied iteratively, then it will always reach a value of 3 and stay there.<br />
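A couple of quick evaluations show the behaviour:<br />
<blockquote class="tr_bq">
<span style="font-family: Courier New, Courier, monospace;">=> (has-fixpoint 0)<br />1<br />=> (has-fixpoint 3)<br />3</span></blockquote>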
<br />
My first approach to this naïvely executed the provided function n times:<br />
<blockquote class="tr_bq">
<span class="s1"><span style="font-family: Courier New, Courier, monospace;">(defn power [f n] (case n<br /> 0 identity<br /> 1 f<br /> (fn [x]<br /> (loop [r x, i n]<br /> (if (zero? i)<br /> r<br /> (recur (f r) (dec i)))))))</span></span></blockquote>
<br />
<div class="p1">
<span class="s1">This works, but one look at it was enough to tell me I'd gone down a blind alley. <span style="font-family: Courier New, Courier, monospace;">case</span> statements always suggest that something is inelegant, and <span style="font-family: Courier New, Courier, monospace;">loop/recur</span> often means that the Clojure's laziness and seq handling are being ignored, which implies that I may have missed an abstraction. Both are a code smell.</span></div>
<div class="p1">
<span class="s1"><br /></span></div>
<div class="p1">
<span class="s1">That's when I remembered <span style="font-family: Courier New, Courier, monospace;">clojure.core/iterate</span>. This function applies a function iteratively, returning an infinite lazy sequence of these iterations. The required power is just the nth entry in that sequence, so the power function just becomes:</span></div>
<blockquote class="tr_bq">
<span style="font-family: Courier New, Courier, monospace;"><span class="s1">(defn power [f n] </span>#(nth (iterate f %) n))</span></blockquote>
<div class="p1">
<span class="s1">Much better.</span></div>
<div class="p1">
<br /></div>
Similarly, I'd implemented the power-limit function using a loop until the input equalled the output:<br />
<blockquote class="tr_bq">
<span class="s1"><span style="font-family: Courier New, Courier, monospace;">(defn power-limit [f a]<br /> (loop [a a]<br /> (let [r (f a)]<br /> (if (= r a)<br /> r<br /> (recur r)))))</span></span></blockquote>
<br />
<div class="p1">
<span class="s1">Unlike the original version of the <span style="font-family: Courier New, Courier, monospace;">power</span> function, I like this approach, since it clearly does exactly what it is defined to do (apply the function, compare input to output, and either return the output or do it again).</span></div>
<div class="p1">
<span class="s1"><br /></span></div>
<div class="p1">
<span class="s1">I already mentioned that loop/recur often suggests that I've missed an abstraction. This, along with neglecting to use the <span style="font-family: Courier New, Courier, monospace;">iterate</span> function for <span style="font-family: Courier New, Courier, monospace;">power</span>, had me wondering if there was a more idiomatic approach here. That's when I realized that I could pair up the iterated sequence of the function with its successor, and compare them. The result is:</span></div>
<blockquote class="tr_bq">
<span class="s1"><span style="font-family: Courier New, Courier, monospace;">(defn power-limit [f a]<br /> (let [s (iterate f a)]<br /> (ffirst<br /> (drop-while (partial apply not=)<br /> (map vector s (rest s))))))</span></span></blockquote>
<div class="p1">
<span class="s1">The iteration sequence is captured in the <span style="font-family: Courier New, Courier, monospace;">let</span> block so that it can be re-used when pairing the sequences in the <span style="font-family: Courier New, Courier, monospace;">map</span> operation. The <span style="font-family: Courier New, Courier, monospace;">vector</span> is being used here to capture each iteration with its successor in vectors of pairs. Because <span style="font-family: Courier New, Courier, monospace;">iterate</span> is lazy, the sequence of iterations is only calculated as needed, and only calculated once.</span></div>
<div class="p1">
<span class="s1"><br /></span></div>
<div class="p1">
<span class="s1">The <span style="font-family: Courier New, Courier, monospace;">drop-while</span> drops off all pairs of values where the values are not equal. The <span style="font-family: Courier New, Courier, monospace;">not=</span> operation can take multiple arguments, so rather than pull each of the two values out, I'm using <span style="font-family: Courier New, Courier, monospace;">partial/apply</span> to provide the pair as all the arguments. The result of the <span style="font-family: Courier New, Courier, monospace;">drop-while/map</span> is a sequence of pairs of argument/results (which are all equal). The final result is to take the first value from the first pair, hence the use of <span style="font-family: Courier New, Courier, monospace;">ffirst</span>.</span></div>
<div class="p1">
<span class="s1"><br /></span></div>
<div class="p1">
<span class="s1">Conceptually, I like it more, and I think that it fits into Clojure better. However, it's less clear to read then the loop/recur solution. So I'm torn as to which one I like more.</span></div>
<h2>DDD on Mavericks (2013-12-08)</h2>
Despite everything else I'm trying to get done, I've been enjoying some of my time working on <a href="https://github.com/alandipert/gherkin">Gherkin</a>. However, after merging in a new feature the other day, I caused a regression (dammit).<div>
<br /></div>
<div>
Until now, I've been using judiciously inserted calls to <span style="font-family: Courier New, Courier, monospace;">echo</span> to trace what has been going on in Gherkin. That <i>does</i> work, but it can introduce bugs by interfering with the return values of functions, it needs cleaning up afterwards, the code has to be modified each time something new needs to be inspected, and it can produce a lot of unnecessary output before the required data shows up. Basically, once you get past a particular point, you really need to adopt a more scalable approach to debugging.</div>
<div>
<br /></div>
<div>
Luckily, Bash code like Gherkin can be debugged with <a href="http://bashdb.sourceforge.net/">bashdb</a>. I'm not really familiar with bashdb, so I figured I should use <a href="http://www.gnu.org/software/ddd/">DDD</a> to drive it. Like most Gnu projects, I usually try to install them with <a href="http://www.finkproject.org/">Fink</a>, and sure enough, DDD is available. However, installation failed, with an error due to an ambiguous overloaded + operator. It turns out that this is due to an update in the C++ compiler in OSX Mavericks. The code fix is trivial, though <a href="http://savannah.gnu.org/patch/?8178">the patch</a> hasn't been integrated yet.</div>
<div>
<br /></div>
<div>
Downloading DDD directly and running configure got stuck on finding the X11 libraries. I could have specified them manually, but I wasn't sure which ones Fink likes to use, and the system has several available (some of them old). The correct one was /usr/X11R6/lib, but given the other dependencies of DDD I preferred to use Fink. However, until the Fink package gets updated, it won't compile and install on its own, so I figured I should try to tweak Fink to apply the patch.</div>
<div>
<br /></div>
<div>
Increasing the verbosity level on Fink showed up a patch file that was already being applied from:</div>
<div>
<div class="p1">
/sw/fink/dists/stable/main/finkinfo/devel/ddd.patch</div>
<div class="p1">
<br /></div>
<div class="p1">
It looks like Fink packages all rely on the basic package, with a patch applied in order to fit with Fink or OS X. So all it took was for me to update this file with the one-line patch that would make DDD compile. One caveat is that Fink uses package descriptions that include checksums for various files, including the patch file. My first attempt at using the new patch reported both the expected checksum and the one that was found, so that made it easy to update the .info file.</div>
<div class="p1">
<br /></div>
<div class="p1">
If you found yourself here while trying to get Fink to install DDD, then just use these 2 files to replace the ones in your system:</div>
<div class="p1">
</div>
<div class="p1">
<a href="https://drive.google.com/file/d/0B3C5oOrqKl4UUTI3MUJHa21KUEU/edit?usp=sharing">/sw/fink/dists/stable/main/finkinfo/devel/ddd.info</a></div>
<div class="p1">
</div>
<div class="p1">
<a href="https://drive.google.com/file/d/0B3C5oOrqKl4UbDM2dDN5MGM1TEE/edit?usp=sharing">/sw/fink/dists/stable/main/finkinfo/devel/ddd.patch</a></div>
<div class="p1">
<br /></div>
<div class="p1">
If you have any sense, you'll check that the patch file does what I say it does. :)</div>
<div class="p1">
<br /></div>
<div class="p1">
Note that when you update your Fink repository, these files should get replaced.</div>
</div>
<h2>Clojurescript and Node.js (2013-03-03)</h2>
For a little while now I've been interested in <a href="https://developer.mozilla.org/en-US/docs/JavaScript" target="_blank">JavaScript</a>. JS was a strong theme at <a href="https://thestrangeloop.com/" target="_blank">Strangeloop</a> 2011, where it was being touted as the next big Virtual Machine. I'd only had a smattering of exposure to JS at that point, so I decided to pick up Douglas Crockford's <a href="http://shop.oreilly.com/product/9780596517748.do" target="_blank">JavaScript: The Good Parts</a>, which I quite enjoyed. I practiced what I learned on <a href="https://github.com/revelytix/robusta" target="_blank">Robusta</a>, which is both a <a href="http://www.w3.org/TR/2012/PR-sparql11-overview-20121108/" target="_blank">SPARQL</a> client library for JS and a simple UI for talking to SPARQL endpoints. It was a good learning exercise and it's proven to be useful for talking to <a href="http://jena.apache.org/" target="_blank">Jena</a>. Since then, I've thought that it might be interesting to write some projects in JS, but my interests are usually in data processing rather than on the UI front end. So the browser didn't feel like the right platform for me.<br />
<br />
When <a href="https://github.com/clojure/clojurescript" target="_blank">ClojureScript</a> came out, I was interested, but again without a specific UI project in mind I kept putting off learning about it.<br />
<br />
At this point I'd heard about <a href="http://nodejs.org/">node.js</a>, but had been misled into thinking that it was for server-side JS, which again didn't seem that interesting when the server offered so many interesting frameworks to work with (e.g. <a href="http://rubyonrails.org/" target="_blank">Rails</a>).<br />
<br />
This last Strangeloop had several talks about JS, and about the efficiency of the various engines, despite having to continue supporting some unfortunate decisions early in the language's design (or lack of design in some cases). I came away with a better understanding of how fast this system can be and started thinking about how I should try working with it.<br />
<br />
But then came <a href="http://clojure-conj.org/">Clojure/conj</a> last year and it all gelled.<br />
<br />
When I first heard <a href="http://clojure-conj.org/speakers/granger.html">Chris Granger</a> speaking, I'll confess that I didn't immediately see what was going on. He was targeting Node.js for <a href="http://www.chris-granger.com/lighttable/">Light Table</a>, but as I said, I didn't realize what Node really was. (Not to mention that the social aspects of the conj had me a little sleep deprived.) So it wasn't until after his talk that I commented to a colleague (@gtrakGT) that it'd be great if there were a JS engine that wasn't attached to a browser, but still had a library that let you do real-world things. In retrospect, I feel like an idiot. :-)<br />
<br />
So I started on Node.<br />
<h3>
ClojureScript with Node.js</h3>
While JS has some nice features, in reality I'd much rather write Clojure, meaning that I finally had a reason to try ClojureScript. My first attempts were a bit of a bumpy ride, since I had to learn how to target ClojureScript to Node, how to access functions for getting program arguments, how to read/write with Node functions. Most of all, I had to learn that if I accidentally used a filename ending in .clj instead of .cljs then the compiler would silently fail and the program would print bizarre errors.<br />
<br />
All the same, I was impressed with the speed of starting up a ClojureScript program. I found myself wondering about how fast various operations were running, in comparison to Clojure on the JVM. This is a question I still haven't got back to, but it did set me off in some interesting directions.<br />
<br />
While driving home after the conj, the same colleague who'd told me how wrong I'd been about Node.js asked me about doing a simple-yet-expensive calculation like large factorials. It was trivial in Clojure, so I tried it with ClojureScript, and immediately discovered that ClojureScript uses JavaScript's numbers. These are actually floating point doubles, which for integers means that only 53 bits of precision are available. Clojure automatically expands large numbers into the BigInteger class from Java, but JavaScript doesn't have an equivalent to work with.<br />
<br />
At that point I should have considered searching the web for a BigInteger implementation in JavaScript, but I was curious about how BigIntegers were implemented, so I started looking at the code for java.math.BigInteger to see if it could be reproduced in Clojure (using protocols, so I can make objects that look a little like the original Java objects). The bit manipulations started getting tricky, and I put it aside for a while.<br />
<br />
Then recently I had a need to pretty-print some JSON and EDN. I'd done it in Clojure before, but that was run as a command-line process, and starting the JVM for it each time is just too painful for words. So I tried writing it in JS for Node, and was very happy with both the outcome and the speed of it. This led me back to trying the same thing in ClojureScript.<br />
<h3>
ClojureScript Hello World Process</h3>
The first thing to do with ClojureScript and Node is to get a Hello World program going. It had been months since I'd tried this, so I thought I should try again. Once again, the process did not work smoothly, so I thought I'd write it down here.<br />
<br />
Clojure can be tricky to use without Leiningen, so this is typically my first port of call for any Clojure-based project. I started out with "lein new hello" to start the new project. This creates a "hello" directory, along with various standard sub-directories to get going.<br />
<br />
The project that has been created is a Clojure project, rather than ClojureScript, so the project file needs to be updated to work with ClojureScript instead. This means opening up <span style="font-family: Courier New, Courier, monospace;">project.clj</span> and updating it to include <a href="https://github.com/emezeske/lein-cljsbuild">lein-cljsbuild</a> along with the configuration for building the ClojureScript application.<br />
<br />
The lein-cljsbuild build system is brought in by adding a plugin and a build configuration to the project. I also like being able to run "lein compile", and this needs a "hook" to be added as well:<br />
<br />
<pre><code> :hooks [leiningen.cljsbuild]
:plugins [[lein-cljsbuild "0.3.0"]]
:cljsbuild {
:builds [{
:source-paths ["src/hello"]
:compiler { :output-to "src/js/hello.js"
:target :nodejs
:optimizations :simple
:pretty-print true }}]}</code></pre>
<br />
Some things to note here:<br />
<ul>
<li>The source path (a sequence of paths, but with just one element here) is where the ClojureScript code can be found.</li>
<li>The <span style="font-family: Courier New, Courier, monospace;">:output-to</span> can be anywhere, but since it's source code and I wanted to look at it, I put it back into <span style="font-family: Courier New, Courier, monospace;">src</span>, albeit in a different directory.</li>
<li>The <span style="font-family: Courier New, Courier, monospace;">:target</span> has been set to <span style="font-family: Courier New, Courier, monospace;">:nodejs</span>. Without this the system won't be able to refer to anything in Node.js.</li>
<li>The <span style="font-family: Courier New, Courier, monospace;">:optimizations</span> are set to <span style="font-family: Courier New, Courier, monospace;">:simple</span>. They can also be set to <span style="font-family: Courier New, Courier, monospace;">:advanced</span>, but nothing else. (More on this later).</li>
</ul>
To start with, I ignored most of this and instead got it all running at the REPL. That meant setting up the dependencies in the project file to include cljs-noderepl:<br />
<br />
<pre><code> :dependencies [[org.clojure/clojure "1.5.0"]
[org.clojure/clojurescript "0.0-1586"]
[org.bodil/cljs-noderepl "0.1.7"]]</code></pre>
<br />
(Yes, <a href="https://twitter.com/richhickey/status/307520978706112512">Clojure 1.5.0 was released last Friday</a>!)<br />
<br />
Using the REPL requires that it be started with the appropriate classpath, which lein does for you when the dependencies are set up and you use the "lein repl" command. Once there, the ClojureScript compiler just needs to be "required" and then the compiler can be called directly, with similar options to those shown in the project file:<br />
<pre><code>user=> (require 'cljs.closure)
nil
user=> (cljs.closure/build "src/hello" {:output-to "src/js/hello.js" :target :nodejs :optimizations :simple})
nil</code></pre>
<br />
Once you can do all of this, it's time to modify the source code:<br />
<pre><code> (ns hello.core)
(defn -main []
(println "Hello World"))</code></pre>
<pre><code>
(set! *main-cli-fn* -main)</code></pre>
<br />
The only trick here is setting *main-cli-fn* to the main function. This tells the compiler which function to run automatically.<br />
<br />
The project starts with a core.clj file, which is what you'll end up editing. Compiling this appears to work fine, but when you run the resulting JavaScript you get an obscure error. The fix for this is to change the source filename to core.clj<b><i>s</i></b>. Once this has been changed, the command line compiler (called with "lein compile") will tell you which files were compiled (the compiler called from the REPL will continue to be silent). If the command line compiler does not mention a file by name, then it wasn't compiled.<br />
<br />
I wanted to get a better sense of what was being created by the compiler, so I initially tried using optimization options of <span style="font-family: Courier New, Courier, monospace;">:none</span> and <span style="font-family: Courier New, Courier, monospace;">:whitespace</span>, but then I got errors of undefined symbols when I tried to run the program. I reported this as a bug, but was told that this was known and an issue with Google's Closure tool (which ClojureScript uses). The <span style="font-family: Courier New, Courier, monospace;">:simple</span> optimizations seem to create semi-readable code though, so I think I can live with it.<br />
<br />
Interestingly, compiling created a directory called "out" that contained a lot of other code generated by the compiler. Inspection showed several files, including a core.cljs file that carries the Clojure core functions. I'm presuming that this gets fed into the clojurescript compiler along with the user program. The remaining files are mostly just glue for connecting things together. For instance, nodejscli.cljs contains:<br />
<br />
<pre><code>(ns cljs.nodejscli
(:require [cljs.nodejs :as nodejs]))
; Call the user's main function
(apply cljs.core/*main-cli-fn* (drop 2 (.-argv nodejs/process)))</code></pre>
<br />
This shows what ends up happening with the call to <span style="font-family: Courier New, Courier, monospace;">(set! *main-cli-fn* -main)</span> that was required in the main program.<br />
<h3>
Where Now?</h3>
Since Node.js provides access to lots of system functions, I started to wonder just how far I could push this system. Browsing the <a href="http://nodejs.org/api/">Node.js API</a> I found functions for I/O and networking, so there seems to be some decent scope in there. However, since performance is really important to <a href="http://code.google.com/p/v8/">V8</a> (which Node.js is built on), how about fast I/O operations like memory-mapping files?<br />
<br />
I was disappointed to learn that there are in fact serious limits to Node.js. However, since the engine is native code, there is nothing to prevent extending it to do anything you want. Indeed, using "require" in JavaScript will do just that. It didn't take much to find a <a href="https://github.com/bnoordhuis/node-mmap">project that wraps mmap</a>, though the author freely admits that it was an intellectual exercise and that users probably don't want to use it.<br />
<br />
Using a library like mmap in ClojureScript was straightforward, but it showed up another little bug. Calling "require" from JavaScript returns the object containing the module's functions. Constant values in the module can be read using the usual interop syntax. So to see the numeric value of PROT_READ, you can say:<br />
<br />
<pre><code>(defn -main []
(let [m (js/require "mmap")]
(println "value: " m/PROT_READ)))</code></pre>
<br />
<br />
However, a module import like this seems to map more naturally to Clojure's <span style="font-family: Courier New, Courier, monospace;">require</span>, so I tried a more "global" approach and def'ed the value instead:<br />
<br />
<pre><code>(def m (js/require "mmap"))
(defn -main []
(println "value: " m/PROT_READ))</code></pre>
<br />
<br />
However, this leads to an error in the final JavaScript. Fortunately, the . operator will work in this case. This also leads to one of the differences with Clojure. Using the dot operator with fields requires that the field name be referenced with a dash leader:<br />
<br />
<pre><code>(def m (js/require "mmap"))
(defn -main []
(println "value: " (.-PROT_READ m)))</code></pre>
<br />
<br />
Finally, I was able to print the bytes out of a short test file with the following:<br />
<br />
<pre><code>(ns pr.core
(:use [cljs.nodejs :only [require]]))
(def u (require "util"))
(def fs (require "fs"))
(def mmap (js/require "mmap"))
(defn mapfile [filename]
(let [fd (.openSync fs filename "r")
sz (.-size (.fstatSync fs fd))]
(.map mmap sz (.-PROT_READ mmap) (.-MAP_SHARED mmap) fd 0)))
(defn -main []
(let [buffer (mapfile "data.bin")
sz (alength buffer)]
(println "size of file: " sz)
(doseq [i (range sz)]
(print " " (aget buffer i)))
(println)))
(set! *main-cli-fn* -main)</code></pre>
<br />
Since I've now seen that it's possible, it's inspired me to think about reimplementing Mulgara's <a href="http://mulgara.org/svn/mulgara/trunk/src/jar/util-xa/java/org/mulgara/store/xa/MappedBlockFile.java">MappedBlockFile</a> and <a href="http://mulgara.org/svn/mulgara/trunk/src/jar/util-xa/java/org/mulgara/store/xa/AVLNode.java">AVLNode</a> classes for Clojure <i>and</i> ClojureScript. That would be a good opportunity to get a BTree implementation coded up as well.<br />
<br />
I've been thinking of redoing these things for Clojure ever since starting on Datomic. Mulgara's phased trees are strikingly like Datomic's, with one important exception - in Mulgara we went to a lot of effort to reap old nodes for reuse. This is complex, and had a performance cost. It was important 13 years ago when we first started on it, but things have changed since then. More recent implementations of some of Mulgara have recognized that we don't need to be so careful with disk space any more, but Rich had a clearer insight: not only <i>can</i> we keep older transactions, but we <i>should</i>. As soon as I realized that, then I realized it would be easy to improve performance dramatically in Mulgara. However, to make it worthwhile we'd have to expose the internal phase information in the same way that Datomic does.<br />
<br />
Unfortunately, Mulgara is less interesting to me at the moment, since it's all in Java, which is why I'm re-implementing so much RDF work in Clojure. A start to that can be found in <a href="https://github.com/quoll/CRG">crg</a>, <a href="https://github.com/quoll/crg-turtle">crg-turtle</a>, and <a href="https://github.com/quoll/kiara">kiara</a>. Mulgara isn't going away... but it will get modernized.<br />
<h3>
More in ClojureScript</h3>
<div>
Family is calling, so I need to wrap this post up. Funnily enough, I didn't get to write about the thing that made me think to blog in the first place. That's bit manipulations.</div>
<div>
<br /></div>
<div>
You'll recall that I mentioned the lack of a BigInteger in ClojureScript (since it's a Java class). As an exercise in doing more in ClojureScript, and learning about how BigInteger is implemented, I started trying to port this class in Clojure (and ultimately, ClojureScript). The plumbing is easy enough, as I can just use a record and a protocol, but some of the internals have been trickier. That's because BigInteger packs the numbers into an array of bits, that is in turn represented with an int array. More significantly, BigInteger does lots of bit manipulation on these ints.</div>
<div>
<br /></div>
<div>
Bit manipulation <i>is</i> supported in JavaScript and ClojureScript, but with some caveats. One problem is that the non-sign-extending right-shift operation (>>>) is not supported. I was surprised to learn that it isn't supported in Clojure either (this seems strange to me, since it would be trivial to implement). The bigger problem is that numbers are stored as floating point values. Fortunately, numbers of 32 bits or smaller can be treated as unsigned integers. However, integers can get larger than this limit, and if that happens, then some of the bit operations stop working. It's possible to do bit manipulation up to 53 bits, but signed values won't work as expected in that range, so it's basically off the table.</div>
<div>
<br /></div>
<div>
Anyway, I have a lot to learn yet, and a long way to go to even compare how a factorial in ClojureScript will compare to the same operation in Clojure, but in the meantime, it's fun. Searching for Node.js modules has shown a lot of early exploration is being performed by people from all over. Some of the recent threading modules are a good example of this.</div>
<div>
<br /></div>
<div>
I have a feeling that Node.js will get more important as time goes on, and with it, ClojureScript.</div>
Paulahttp://www.blogger.com/profile/03653112583629043593noreply@blogger.com2tag:blogger.com,1999:blog-6848574.post-25243297392893975832012-08-28T12:24:00.000-05:002012-08-28T12:24:16.769-05:00Tnetstrings<h2>
A Forward Reference</h2>
Having had a couple of responses to the last post, I couldn't help but revisit the code. There's not much to report, but since I've had some feedback from a few people who are new to Clojure I thought that there were a couple of things that I could mention.<br />
<br />
First, (and you can see this in the <a href="http://www.blogger.com/comment.g?blogID=6848574&postID=623688620478134421">comments to the previous post</a>) I wrote my original code in a <a href="http://en.wikipedia.org/wiki/REPL" target="_blank">REPL</a> and pasted it into the blog. Unfortunately, this caused me to miss a forward reference to the <code>parse-t</code> function. On my first iteration of the code, I wasn't trying to parse all the data types, so the map of parsers didn't need to recurse into the <code>parse-t</code> function. However, when I updated the map, the <code>parse-t</code> function had been fully defined, so the references worked just fine.<br />
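For anyone who hits the same thing when entering the code from top to bottom: the usual fix is a forward declaration, for example (this is just the standard idiom, not a change to the post's code):<br />
<pre><code>(declare parse-t)                        ; forward declaration

(def parsers {\] (fn [s] (parse-t s))})  ; can now refer to parse-t

(defn parse-t [msg]
  [msg ""])                              ; stub body, just so this snippet compiles</code></pre>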
<br />
<h2>
Testing</h2>
That brings me to my second and third points: testing and <a href="http://leiningen.org/" target="_blank">Leiningen</a>. As is often the case, I found the issues by writing and running tests. Setting up an environment for tests can be annoying for some systems, particularly for such a simple function. However, using Leiningen makes it very easy. The entire project was built using Leiningen, and was set up with the simple command:<br />
<code>lein new tnetstrings</code>.
<br />
I'll get on to Leiningen in a moment, but for now I'll stick with the tests.<br />
<br />
Clojure tests are easy to set up and use. They are based on a <acronym title="Domain Specific Language">DSL</acronym> built out of a set of macros that are defined in <code>clojure.test</code>. The two main macros are <code>deftest</code> and <code>is</code>. <code>deftest</code> is used to define a test in the same way that <code>defn</code> is used to define a function, sans a parameter definition. In fact, a test <em>is</em> a function, and can be called directly (it takes no parameters). This is very useful to run an individual test from a REPL.<br />
<br />
The other main macro is called "<code>is</code>" and is simply used to assert the truth of something. This macro is used inside a test.<br />
<br />
My tests for tnetstrings are very simple:<br />
<br />
<pre><code>(ns tnetstrings.test.core
(:use [tnetstrings.core :only [parse-t]]
[clojure.test]))
(deftest test-single
(is (= ["hello" ""] (parse-t "5:hello,")))
(is (= [42 ""] (parse-t "2:42#")))
(is (= [3.14 ""] (parse-t "4:3.14^")))
(is (= [true ""] (parse-t "4:true!")))
(is (= [nil ""] (parse-t "0:~"))))
(deftest test-compound
(is (= [["hello" 42] ""] (parse-t "13:5:hello,2:42#]")))
(is (= [{"hello" 42} ""] (parse-t "13:5:hello,2:42#}")))
(is (= [{"pi" 3.14, "hello" 42} ""] (parse-t "25:5:hello,2:42#2:pi,4:3.14^}"))))</code></pre>
<br />
<br />
Note that I've brought in the <code>tnetstrings.core</code> namespace (my source code), and only referenced the <code>parse-t</code> function. I always try to list the specific functions I want in a <code>use</code> clause, though I'm not usually so particular when writing test code. You'll also see <code>clojure.test</code>. As mentioned, this is necessary for the <code>deftest</code> and <code>is</code> macros. It is worth pointing out that both of these <code>use</code> clauses were automatically generated for me by Leiningen, along with the first <code>deftest</code>.<br />
<br />
I could have created a convenience function that just extracted the first element out of the returned tuple, thereby making the tests more concise. However, I intentionally tested the entire tuple, to ensure that nothing was being left at the end. I ought to create a string with some garbage at the end as well, to see that being returned, but the array and map tests have this built in... and I was being lazy.<br />
<br />
Something else that caught me out was that when I parsed a floating point number, I did it with <code>java.lang.Float/parseFloat</code>. This worked fine, but by default Clojure uses <code>double</code> values instead, and all floating point literals are parsed this way. Consequently the tests around <code>"4:3.14^"</code> failed with messages like:<br />
<br />
<pre><code>expected: (= [3.14 ""] (parse-t "4:3.14^"))
actual: (not (= [3.14 ""] [3.14 ""]))</code></pre>
<br />
What isn't being shown here is that the two values of 3.14 have different types (float vs. double). Since Clojure prefers <code>double</code>, I changed the parser to use <code>java.lang.Double/parseDouble</code> and the problem was fixed.
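The mismatch is easy to see at the REPL; the float, once widened to a double for comparison, is no longer equal to the double literal:<br />
<pre><code>user=> (= (Float/parseFloat "3.14") 3.14)
false
user=> (= (Double/parseDouble "3.14") 3.14)
true</code></pre>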
<br />
<br />
<h2>
Leiningen</h2>
For anyone unfamiliar with Leiningen, here is a brief rundown of what it does. By running the <code>new</code> command Leiningen sets up a directory structure and a number of stub files for a project. By default, two of these directories are <code>src/</code> and <code>test</code>. Under <code>src/</code> you'll find a stub source file (complete with namespace definition) for the main source code, and under <code>test/</code> you'll find a stub test file, again with the namespace defined, and with <code>clojure.test</code> already brought in for you. In my case, these two files were:<br />
<br />
<ul>
<li><code>src/tnetstrings/core.clj</code></li>
<li><code>test/tnetstrings/test/core.clj</code></li>
</ul>
<br />
To get running, all you have to do is put your code into the <code>src/</code> file, and put your tests into the <code>test/</code> file. Once this is done, you use the command:<br />
<code>lein test</code><br />
to run the tests. Clojure gets compiled as it is run, so any problems in syntax and grammar can be found this way as well.<br />
<br />
However, one of the biggest advantages to using this build environment, is the ease of bringing in libraries. Using Leiningen can be similar to using <a href="http://maven.apache.org/">Maven</a>, without much of the pain, and indeed, Leiningen even offers a <code>pom</code> command to generate a Maven POM file. It automatically downloads packages from both <a href="https://clojars.org/">Clojars</a> and Maven repositories, so this feature alone makes it valuable.<br />
<br />
Leiningen is configured with a file called <code>project.clj</code> which is autogenerated when a project is created. This file is relatively easy to configure for simple things, so rather than delving into it here, I'll let anyone new to the system go to the project page and <a href="https://github.com/technomancy/leiningen/blob/master/sample.project.clj" target="_blank">sample file</a> to learn more about it.<br />
<br />
<code>project.clj</code> also works for some not-so-simple setups, but it gets more and more difficult the fancier it gets. It's relatively easy to update the source path, test path, etc, to mimic Maven directory structures, which can be useful, since the Maven structure allows different file types (e.g. Java sources, resources) to be stored in different directories. But since I always want this, it's annoying that I always have to manually configure it.<br />
<br />
I'm also in the process of copying <a href="https://github.com/alexhall/lein-antlr" target="_blank">Alex Hall's setup</a> for pre-compiling <a href="http://www.antlr.org/">Antlr</a> parser definitions so that I can do the same with <a href="http://beaver.sourceforge.net/">Beaver</a>. Again, it's great that I <em>can</em> do this with Leiningen, but it's annoying to do so. I shouldn't be too harsh though, as the way that extensions are done looks more like it derives from the flexibility of Clojure than from Leiningen itself.
<h2>Clojure DC (2012-08-22)</h2>
<p>Tonight was the first night for <a href="http://www.meetup.com/ClojureDC/">Clojure DC</a>, which is a meetup group for Clojure users. It's a bit of a hike for me to get up there, but I dread getting too isolated from other professionals, so I decided it was worth making the trip despite the distance and traffic. Luckily, I was not disappointed.</p>
<p>Although I was late (have you ever tried to cross the 14th Street Bridge at 6pm? Ugh), not much had happened beyond some pizza consumption. The organizers, Matt and Chris, had a great venue to work with, and did a good job of getting the ball rolling.</p>
<p>After introductions all around, Matt and Chris gave a description of what they're hoping to do with the group, prompted us with ideas for future meetings, and asked for feedback. They suggested perhaps doing some <a href="https://github.com/functional-koans/clojure-koans">Clojure Koans</a>, and in that spirit they provided new users with an introduction to writing Clojure code (not an intro to Clojure, but an intro to <em>writing</em> code), by embarking on a function to parse <a href="http://tnetstrings.org/">tnetstrings</a>. I'd never heard of these, but they're a similar concept to <a href="http://www.json.org/">JSON</a>, only the encoding and parsing is even simpler.</p>
<p>This part of the presentation was fun, since Matt and Chris had a banter that was reminiscent of <a href="http://blip.tv/clojure/dan-friedman-and-william-byrd-minikanren-5936333">Daniel Friedman and William Byrd presenting miniKanren</a> at <a href="http://clojure-conj.org/">Clojure/Conj</a> last year. While writing the code they asked for feedback, and I was pleased to learn a few things from some of the more experienced developers who'd shown up (notably, Relevance employee <a href="http://thinkrelevance.com/team/members/craig-andera">Craig Andera</a>, and ex-<a href="http://clojure.com/">Clojure.core</a> developer, and co-author of <a href="http://www.joyofclojure.com/">my favorite Clojure book</a>, <a href="http://fogus.me/">Michael Fogus</a>). For instance, while I knew that maps operate as functions where they look up an argument in themselves, I did not know that they can optionally accept a "not-found" parameter like <a href="http://clojure.github.com/clojure/clojure.core-api.html#clojure.core/get">clojure.core/get</a> does. I've always used "get" to handle this in the past, and it's nice to know I can skip it.</p>
<p>While watching what was going on, I decided that a <a href="http://clojure.org/other_functions#Other%20Useful%20Functions%20and%20Macros-Regex%20Support">regex</a> would work nicely. So I ended up giving it a go myself. The organizers stopped after parsing a string and a number, but I ended up doing the lot, including maps and arrays. Interestingly, I decided I needed to return a tuple, and after I finished I perused the reference Python implementation and discovered that this returned the same tuple. Always nice to know when you're on the right track. :-)</p>
<p>Anyway, my attempt looked like:</p>
<pre><code>(ns tnetstrings.core)
(def type-map {\, identity
\# #(Integer/parseInt %)
\^ #(Float/parseFloat %)
\! #(Boolean/parseBoolean %)
\~ (constantly nil)
\} (fn [m] (loop [mp {} remainder m]
(if (empty? remainder)
mp
(let [[k r] (parse-t remainder)
[v r] (parse-t r)]
(recur (assoc mp k v) r)))))
\] (fn [m] (loop [array [] remainder m]
(if (empty? remainder)
array
(let [[a r] (parse-t remainder)]
(recur (conj array a) r)))))})
(defn parse-t [msg]
(if-let [[header len] (re-find #"([0-9]+):" msg)]
(let [head-length (count header)
data-length (Integer/parseInt len)
end (+ data-length head-length)
parser (type-map (nth msg end) identity)]
[(parser (.substring msg head-length end)) (.substring msg (inc end))])))</code></pre>
<p>There are lots of long names in here, but I wasn't trying to play "<a href="http://codegolf.com/">golf</a>". The main reason I liked this was because of the if-let I introduced. It isn't perfect, but if the data doesn't start out correctly, then the function just returns nil without blowing up.</p>
<p>While this worked, it was bothering me that both the array and the map forms looked so similar. I thought about this in the car on the way home, and I recalled the handy equivalence:</p>
<pre><code>(= (assoc m k v) (conj m [k v]))</code></pre>
<p>So with this in hand, I had another go when I got home:</p>
<pre><code>(ns tnetstrings.core)
(defn embedded [s f]
(fn [m] (loop [data s remainder m]
(if (empty? remainder)
data
(let [[d r] (f remainder)]
(recur (conj data d) r))))))
(def type-map {\, identity
\# #(Integer/parseInt %)
\^ #(Float/parseFloat %)
\! #(Boolean/parseBoolean %)
\~ (constantly nil)
\} (embedded {} (fn [m] (let [[k r] (parse-t m)
[v r] (parse-t r)]
[[k v] r])))
\] (embedded [] (fn [m] (let [[a r] (parse-t m)]
[a r])))})
(defn parse-t [msg]
(if-let [[header len] (re-find #"([0-9]+):" msg)]
(let [head-length (count header)
data-length (Integer/parseInt len)
end (+ data-length head-length)
parser (type-map (nth msg end) identity)]
[(parser (.substring msg head-length end)) (.substring msg (inc end))])))</code></pre>
<p>So now each of the embedded structures is based on a function returned from "embedded". This contains the general structure of:<ul><li>Seeing if there is anything left to parse.</li><li>If not, then return the already parsed data.</li><li>If so, then parse it, and add the parsed data to the structure before repeating on the remaining string to be parsed.</li></ul>In the case of the array, just one element is parsed by re-entering the main parsing function. The result is just the returned data. In the case of the map, the result is a key/value tuple, obtained by re-entering the parsing function twice. By wrapping the key/value like this we not only get to return it as a single "value", but it's also in the form required for the conj function that is used on the provided data structure (vector or map).</p>
<p>The result looks a little noisy (lots of brackets and parentheses), but I think it abstracts out the operations much better. Exercises like this are designed to help you think about problems the right way, so I think it was a great little exercise.</p>
<p>Other than this code, I also got the chance to chat with a few people, which was the whole point of the trip. It's getting late, so I won't go into those conversations now, but I was pleased to hear that many of them will be going to Clojure/Conj this year.</p>Paulahttp://www.blogger.com/profile/03653112583629043593noreply@blogger.com5tag:blogger.com,1999:blog-6848574.post-15818770470998336962012-05-22T23:36:00.000-05:002012-05-23T07:52:09.841-05:00Clojure Lessons<p>Recently I've been working with Java code in a <a href="http://www.springsource.org/">Spring</a> framework. I'm not a big fan of Spring, since the bean approach means that everything has a similar public face, so the data types don't document the system very well. The bean approach also means that most types can be plugged into most places (kind of like Lego), but just because something can be connected doesn't mean it will do anything meaningful. It can make for a confusing system. As a result, I'm not really having much fun at work.</p>
<p>To get myself motivated again, I thought I'd try something fun and render a <a href="http://en.wikipedia.org/wiki/Mandelbrot_set">Mandelbrot set</a>. I know these are easy, but it's something I've never done for myself. I also thought it might be fun to do something with graphics on the JVM, since I'm always working on server-side code. Turned out that it <em>was</em> fun, keeping me up much later than I ought to have been. Being tired tonight I may end up rambling a bit. It may also be why I've decided to spell "colour" the way I grew up with, rather than the US way (except in code. After all, I have to use the <code>Color</code> class, and it's just too obtuse to have two different spellings in the same program).</p>
<p>To get my feet wet, I started with a simple Java application, with a plan to move it into Clojure. My approach gave me a class called <code>Complex</code> that can do the basic arithmetic (trivial to write, but surprising that it's not already there), and an abstract class called <code>Drawing</code> that does all of the Window management and just expects the implementing class to implement <code>paint(Graphics)</code>. With that done it was easy to write a pair of functions:</p>
<ul><li><code>coord2Math</code> to convert a canvas coordinate into a complex number.</li><li><code>mandelbrotColor</code> to calculate a colour for a given complex number (using a logarithmic scale, since linear shows too many discontinuities in colour).</li></ul>
Drawing this onto a graphical context is easy in Java:
<pre><code>for (int x = 0; x < gWidth; x++) {
for (int y = 0; y < gHeight; y++) {
g.setColor(mandelbrotColor(coord2Math(x, y)));
plot(g, x, y);
}
}</code></pre>
<p>(<code>plot(Graphics,int,int)</code> is a simple function that draws one pixel at the given location).</p>
<p>A small image (300x200 pixels) on this MacBookPro takes ~360ms. A big one (1397x856) took ~11500ms. Room for improvement, but it'll do. So with a working Java implementation in hand, I turned to writing the same thing in Clojure.</p>
<h3>Clojure Graphics</h3>
<p>Initially I tried extending my <code>Drawing</code> class using <code>proxy</code>, with a plan of moving to an implementation completely in Clojure. However, after getting it working that way I realized that doing the entire thing in Clojure wasn't going to take much at all, so I did that straight away. The resulting code is reasonably simple and boilerplate:</p>
<pre><code>;; ns/imports added here so the snippet stands alone (the ns name is a guess)
(ns mandelbrot.ui
  (:import (java.awt Color Dimension Graphics)
           (javax.swing JFrame JPanel SwingUtilities)))

(def window-name "Mandelbrot")
(def default-width 800)   ; example values; use whatever suits your screen
(def default-height 600)

(def draw-fn)  ; forward declaration; bound later by the rendering code

(defn new-drawing-obj []
  (proxy [JPanel] []
    (paint [^Graphics graphics-context]
      (let [width (proxy-super getWidth)
            height (proxy-super getHeight)]
        (draw-fn graphics-context width height)))))

(defn show-window []
  (let [^JPanel drawing-obj (new-drawing-obj)
        frame (JFrame. window-name)]
    (.setPreferredSize drawing-obj (Dimension. default-width default-height))
    (.add (.getContentPane frame) drawing-obj)
    (doto frame
      (.setDefaultCloseOperation JFrame/EXIT_ON_CLOSE)
      (.pack)
      (.setBackground Color/WHITE)
      (.setVisible true))))

(defn start-window []
  (SwingUtilities/invokeLater #(show-window)))
</code></pre>
<p>Calling <code>start-window</code> sets off a thread that will run the event loop and then call the <code>show-window</code> function. That function uses <code>new-drawing-obj</code> to create a proxy object that handles the <code>paint</code> event. Then it sets the size of the panel, puts it into a frame (the main window), and sets up the frame for display.</p>
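<p>To actually see something, <code>draw-fn</code> needs a binding before the panel paints. As a placeholder (my own example, not the Mandelbrot code), something like this will flood the panel with a single colour:</p>
<pre><code>(def draw-fn
  (fn [^Graphics g width height]
    ;; fill the whole panel to prove the plumbing works
    (.setColor g Color/BLUE)
    (.fillRect g 0 0 width height)))

(start-window)</code></pre>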
<p>The only thing that seems worth noting from a Clojure perspective is the proxy object returned by <code>new-drawing-obj</code>. This is a simple extension of <code>javax.swing.JPanel</code> that implements the <code>paint(Graphics)</code> method of that class. Almost every part of the drawing can be done in an external function (<code>draw-fn</code> here), but the width and height are obtained by calling <code>getWidth()</code> and <code>getHeight()</code> on the <code>JPanel</code> object. That object isn't directly available to the <code>draw-fn</code> function, nor is it available through a name like "this". The object is returned from the <code>proxy</code> function, but that's out of scope for the <code>paint</code> method. The only reasonable way to access methods that are inherited in the proxy is with the <code>proxy-super</code> function (I can think of some unreasonable ways as well, like setting a reference to the proxy and using this reference in <code>paint</code>. But we won't talk about that kind of abuse).</p>
<p>While I haven't shown it here, I also wanted to close my window by pressing the "q" key. This takes just a couple of lines of code, whereby a proxy for <code>KeyListener</code> is created, and then added to the frame via <code>(.addKeyListener the-key-listener-proxy)</code>. Compared to the equivalent code in Java, it's strikingly terse.</p>
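<p>For reference, that listener is just another small proxy, along these lines (the exit action is my assumption; any close behaviour would do):</p>
<pre><code>(def quit-listener
  (proxy [java.awt.event.KeyListener] []
    (keyTyped [event]
      (when (= \q (.getKeyChar event))
        (System/exit 0)))
    (keyPressed [event])    ; no-op
    (keyReleased [event]))) ; no-op

;; added while setting up the frame:
;; (.addKeyListener frame quit-listener)</code></pre>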
<h3>Rendering</h3>
<p>The Java code for rendering used a pair of nested loops to generate coordinates, and then calculated the colour for each coordinate as it went. However, this imperative style of coding is something to explicitly avoid in any kind of functional programming. So the question for me at this point was: how should I think about the problem?</p>
<p>Each time <code>mandelbrotColor</code> is called, it maps a coordinate to a colour. This gave me my first hint. I needed to <em>map</em> coordinates to colours. This implies calling <code>map</code> on a seq of coordinates, and ending up with a seq of colours. (Actually, not a <em>seq</em>, but rather a <a href="http://clojure.com/blog/2012/05/08/reducers-a-library-and-model-for-collection-processing.html"><em>reducible</em> collection</a>). However, what order are the colours in? Row-by-row? That would work, but it would involve keeping a count of the offset while working over the seq, which seems onerous, particularly when the required coordinates were available when the colour was calculated in the first place. So why not include the coordinates in the seq with the colour? Not only does that simplify processing, it makes the rendering of this map stateless, since any element of the seq could be rendered independently of any other.</p>
<p>Coordinates can be created as pairs of integers using a comprehension:</p>
<pre><code> (for [x (range width) y (range height)] [x y])</code></pre>
<p>and the calculation can be done by mapping on a function that unpacks x and y and returns a triple of these two coordinates along with the calculated colour. I'll rename x and y to "a" and "b" in the mapping function to avoid ambiguity:</p>
<pre><code> (map (fn [[a b]] [a b (mandelbrot-color (coord-2-math a b))])
      (for [x (range width) y (range height)] [x y]))</code></pre>
<p>So now we have a sequence of coordinates and colours, but how do these get turned into an image? Again, the form of the problem provides the solution. We have a sequence (of tuples), and we want to reduce it into a single value (an image). Reductions like this are done using <code>reduce</code>. The first parameter for the reduction function will be the image, the second will be the next tuple to draw, and the result will be a new image with the tuple drawn in it. The <code>reduce</code> function isn't really supposed to mutate its first parameter, but since we have no need to keep the original image without the new pixel, it works for us here. The result is the following reduction function (type hint provided to avoid reflection on the graphical context):</p>
<pre><code> (defn plot [^Graphics g [x y c]]
(.setColor g c)
(.fillRect g x y 1 1)
g)</code></pre>
<p>Note that the original graphics context is returned, since this is the "new" value that plot has created (i.e. the image with the pixel added to it). Also, note that the second parameter is a 3-element tuple, which is just unpacked into x, y, and c.</p>
<p>So now the entire render process can be given as:</p>
<pre><code>(reduce plot g
        (map (fn [[a b]] [a b (mandelbrot-color (coord-2-math a b))])
             (for [x (range width) y (range height)] [x y])))</code></pre>
<p>This works just fine, but there were performance issues, which was the part of this process that was most interesting. The full screen render (1397x856) took ~682 seconds (up from the 11.5 seconds it took Java). Obviously there were a few things to be fixed. There is still more to do, but I'll share what I came across so far.</p>
<h3>Reflection</h3>
<p>The first thing that <a href="http://objectcommando.com/blog/">@objcmdo</a> suggested was to look for reflection. I planned on doing that, but thought I'd continue cleaning the program up first. The <code>Complex</code> class was still written in Java, so I embarked on rewriting that in Clojure.</p>
<p>The easiest way to do this was to implement a protocol that describes the actions (plus, minus, times, divide, absolute value), and to then define a record (of real/imaginary) that extends the protocol. It would have been nicer than the equivalent Java, but for one thing. Java allows method overloading based on parameter types, which means that a method like <code>plus</code> can be defined differently depending on whether it receives a <code>double</code> value, or another <code>Complex</code> number. My understanding is that Clojure only overloads functions based on the parameter count, meaning that different function names are required to redefine the same operation for different types. So for instance, the <code>plus</code> functions were written in Java as:</p>
<pre><code> public final Complex plus(Complex that) {
return new Complex(real + that.real, imaginary + that.imaginary);
}
public final Complex plus(double that) {
return new Complex(real + that, imaginary);
}</code></pre>
But in Clojure I had to give them different names:
<pre><code> (plus [this {that-real :real, that-imaginary :imaginary}]
(Complex. (+ real that-real) (+ imaginary that-imaginary)))
(plus-dbl [this that] (Complex. (+ real that) imaginary))
</code></pre>
<p>Not a big deal, but code like math manipulation looks prettier when function overloading is available.</p>
<p>It may be worth pointing out that I used the names of the operations (like "plus") instead of the symbolic operators ("+"). While the issue of function overloading would have made this awkward (<code>+dbl</code> is no clearer than <code>plus-dbl</code>), it has the bigger problem of clashing with functions of the same name in clojure.core. Some namespaces do this (the <code>*</code> character is a popular one to reuse), but I don't like it. You have to explicitly reject it from your current namespace, and then you need to refer to it by its full name if you do happen to need it. Given that <code>Complex</code> needs to manipulate internal numbers, these original operators <em>are</em> needed.</p>
<p>So I created my protocol containing all the operators, defined a <code>Complex</code> record to implement it, and then I replaced all use of the original Java <code>Complex</code> class. Once I was finished I ran it again just to make sure that I hadn't broken anything.</p>
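<p>In outline, the arrangement looked something like this sketch (the protocol name and the reduced set of operations are my reconstruction, not the original code; <code>extend-type</code> keeps the method bodies as ordinary functions, so the map destructuring works):</p>
<pre><code>(defprotocol ComplexArithmetic
  (plus [this that])
  (plus-dbl [this that])
  (times [this that])
  (abs-val [this]))

(defrecord Complex [real imaginary])

(extend-type Complex
  ComplexArithmetic
  (plus [{:keys [real imaginary]} {that-real :real that-imaginary :imaginary}]
    (Complex. (+ real that-real) (+ imaginary that-imaginary)))
  (plus-dbl [{:keys [real imaginary]} that]
    (Complex. (+ real that) imaginary))
  (times [{:keys [real imaginary]} {that-real :real that-imaginary :imaginary}]
    (Complex. (- (* real that-real) (* imaginary that-imaginary))
              (+ (* real that-imaginary) (* imaginary that-real))))
  (abs-val [{:keys [real imaginary]}]
    (Math/sqrt (+ (* real real) (* imaginary imaginary)))))</code></pre>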
<p>To my great surprise, the full screen render went from 682 seconds down to 112 seconds. Protocols are an efficient mechanism, but they shouldn't be <em>that</em> good. At that point I <a href="http://dictionary.reference.com/browse/realise">realised</a> that I hadn't used type hints around the <code>Complex</code> class, and that as a consequence the Clojure code had to perform reflection on the complex numbers. Just as @objcmdo had suggested.</p>
<p>Wondering what other reflection I may have missed, I tried enabling the <code>*warn-on-reflection*</code> flag in the <a href="http://en.wikipedia.org/wiki/REPL">repl</a>, but no warnings were forthcoming. I suspect that this was being subverted by the fact that the code is all being run by a thread that belongs to the Swing runtime. I tried adding some other type hints, but nothing I added had any effect, meaning that the Clojure compiler was already able to figure out the types involved (or else it just wasn't in a critical piece of code).</p>
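<p>(For anyone following along, enabling the flag at the REPL is just:)</p>
<pre><code>(set! *warn-on-reflection* true)</code></pre>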
<h3>Composable Abstractions</h3>
<p>The next thing I wondered about was the map/reduce part of the algorithm. While it made for elegant programming, it was creating unnecessary tuples at every step of the way. Could these be having an impact?</p>
<p>Once you have a nice list comprehension, it's tough to break it out into an imperative-style loop. Aside from ruining the elegance of the original construct, once you've seen your way through to viewing a problem in such clear terms, it's difficult to reconceptualize it as a series of steps. Even when you do, how do you make Clojure work against itself?</p>
<p>Creating a loop without burning through resources can be done easily with tail recursion. Clojure doesn't do this automatically (since the JVM does not provide for it), but it can be emulated well with <em>loop/recur</em>. Since I want to loop between 0 (inclusive) and the width/height (exclusive), I decremented the upper limits for convenience. Also, the <code>plot</code> function is no longer constrained to just 2 arguments, so I changed the definition to accept all 4 arguments directly, thereby eliminating the need to construct that 3-tuple:</p>
<pre><code>(let [dwidth (dec width)
      dheight (dec height)]
  (loop [x 0 y 0]
    (let [[next-x next-y] (if (= x dwidth)
                            (if (= y dheight)
                              [-1 -1] ;; signal to terminate
                              [0 (inc y)])
                            [(inc x) y])]
      (plot g x y (mandelbrot-color (coord-2-math x y)))
      (if (= -1 next-x)
        :end ;; anything can be returned here
        (recur next-x next-y)))))</code></pre>
<p>My word, that's ugly. The <code>let</code> that assigns <code>next-x</code> and <code>next-y</code> has a nasty nested <code>if</code> construct that increments x and resets it at the end of each row. It also returns a sentinel (-1 here, though any invalid coordinate would do) to indicate that the loop should be terminated. The loop itself terminates by testing for the sentinel and returning a value that will be ignored (<code>:end</code> in this case).</p>
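<p>(As an aside: a tidier way to express the same imperative iteration would be <code>doseq</code>; this sketch is my own alternative, not what I actually timed:)</p>
<pre><code>(doseq [x (range width) y (range height)]
  (plot g x y (mandelbrot-color (coord-2-math x y))))</code></pre>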
<p>But it all works as intended. Now instead of creating a tuple for every coordinate, it simply iterates through each coordinate and plots the point directly, just as the Java code did. So what's the performance difference here?</p>
<p>So far, the numbers I've provided are rounded to the nearest second. Repeated runs have usually taken a similar amount of time to the ones that I've reported here. However, there is always some jitter, sometimes by several seconds. Because of this, I was unable to see any difference whatsoever between using <code>map/reduce</code> on a <code>for</code> comprehension, versus using <code>loop/recur</code>.</p>
<p>That's an interesting result, since it shows that the Clojure compiler and JVM are indeed as clever as we're told, when we see that better abstractions are just as efficient as the direct approach. It's all well and good for a language to make it easy to write powerful constructs, but it's when elegant code can be run just as efficiently as more direct, imperative code that a language is really offering useful power.</p>
<p>Aside from the obvious clarity issues, the composability of the <code>for/map/reduce</code> makes an enormous difference. Because each element in the range being mapped is completely independent, we are free to use the <a href="http://clojure.github.com/clojure/clojure.core-api.html#clojure.core/pmap"><code>pmap</code> function</a> instead of <code>map</code>. The documentation claims that this function is,</p>
<blockquote>"Only useful for computationally intensive functions where the time of f dominates the coordination overhead."</blockquote>
<p>Yup. That's us.</p>
<p>So how much does this change make for us? Using <code>map</code> on the current code, a full screen render takes 112 seconds. Changing <code>map</code> to <code>pmap</code> improves it to 75 seconds. That's a 33% improvement with no work, simply because the correct abstraction was applied. That's a very powerful abstraction.</p>
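<p>The change really is that small; the parallel render expression is just:</p>
<pre><code>(reduce plot g
        (pmap (fn [[a b]] [a b (mandelbrot-color (coord-2-math a b))])
              (for [x (range width) y (range height)] [x y])))</code></pre>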
<h3>Future Work</h3>
<p>(Hmmm, that makes this sound like an academic paper. Should I be drawing charts?)</p>
<p>The final result is still a long way short of the 11.5 seconds the naïve Java code renders at. The single threaded version is particularly bad, taking about 10 times as long. I don't expect Clojure to be as fast as Java, but a factor of 10 suggests that there are some obvious things that I've missed, most likely related to reflection. If I can get it down to the same order of magnitude as the Java code, then using <code>pmap</code> could make the Clojure version faster due to being multi-threaded. Of course, Java can be multi-threaded as well, but the effort and infrastructure for doing this would be significant.</p>Paulahttp://www.blogger.com/profile/03653112583629043593noreply@blogger.com3tag:blogger.com,1999:blog-6848574.post-45644810766202234202011-09-04T12:24:00.003-05:002011-09-08T08:47:31.086-05:00<h3>SPARQL JSON</h3> After commenting the other day that Fuseki was ignoring my request for a JSON response, I was asked to submit a bug report. It's easy to cast unsubstantiated stones in a blog, but a bug report is a different story, so I went back to confirm everything. In the process, I looked at the SPARQL 1.1 Query Results JSON Format spec (I can't find a public copy of it, so no link, sorry. UPDATE: <a href="http://www.w3.org/2009/sparql/docs/json-results/json-results-lc.html">Found it</a>.) and was chagrined to discover that it has its own MIME type of "application/sparql-results+json". I had been using the JSON type of "application/json", and this indeed does not work, but the corrected type does. I don't think it's a good idea to ignore "application/json" so I'll report that, but strictly speaking it's correct (at least, I think so. As Andy said to me in a back channel, I don't really know what the + is supposed to mean in the subtype). So Fuseki got it right. Sorry.<br /><br />When I finally get around to implementing this for Mulgara I'll try to handle both. Which reminds me... I'd better get some more SPARQL 1.1 implemented. My day job at Revelytix needs me to do some Mulgara work for a short while, so that may help get the ball rolling for me.<br /><br /><h3>separate</h3> I commented the other day that I wanted a Clojure function that takes a predicate and a seq, and returns two seqs: one that matches the predicate and the other that doesn't match. I was thinking I'd build it with a <code>loop</code> construct, to avoid recursing through the seq twice.<br /><br />The next day, Gary (at work) suggested the <a href="http://clojuredocs.org/clojure_contrib/clojure.contrib.seq-utils/separate">separate</a> function from <a href="http://clojuredocs.org/clojure_contrib">Clojure Contrib</a>. I know there's some good stuff in contrib, but unfortunately I've never taken the time to fully audit it.<br /><br />The implementation of this function is obvious:<pre><code>(defn separate [f s]<br /> [(filter f s) (filter (complement f) s)])</code></pre>I was disappointed to learn that this function iterates twice, but Gary pointed out that there had been a discussion on exactly this point, and the counter argument is that this is the only way to build the results lazily. That's a reasonable point, and it is usually one of the top considerations for Clojure implementations. 
I don't actually have cause for complaint anyway, since the seqs I'm using are always small (being built out of query structures).<br /><br />This code was also a good reminder that <code>(complement x)</code> offers a concise replacement for the alternative code:<pre><code> #(not (x %))</code></pre>By extension, it's a reminder to brush up on my idiomatic Clojure. I should finish the book <a href="http://joyofclojure.com/">The Joy of Clojure</a> (which is a thoroughly enjoyable read, by the way).Paulahttp://www.blogger.com/profile/03653112583629043593noreply@blogger.com4tag:blogger.com,1999:blog-6848574.post-54061019217872008742011-09-01T20:45:00.003-05:002011-09-02T01:30:04.656-05:00<h3>ASTs</h3>
<br />After yesterday's post, I noticed that a number of references came to me via Twitter (using a <a href="http://t.co/">t.co</a> URL). After looking for it, I realized that I'm linked to from the <a href="http://planet.clojure.in/">Planet Clojure</a> page. I'm not sure why, though I guess it's because I'm working for <a href="http://revelytix.com/">Revelytix</a>, and we're one of the larger Clojure shops around. Only my post wasn't about Clojure - it was about JavaScript. So now I feel obligated to write something about Clojure. Besides, I'm out of practice with my writing, so it would do me good.
<br />
<br />So the project I spend most of my time on (for the moment) is called <a href="http://revelytix.com/content/rex-ea">Rex</a>. It's a <a href="http://www.w3.org/standards/techs/rif#w3c_all">RIF</a> based rules engine written entirely in Clojure. It does lots of things, but the basic workflow for running rules is:<ol><li>Parse the rules out of the rule file and into a <a href="http://en.wikipedia.org/wiki/Concrete_syntax_tree">Concrete Syntax Tree</a>. We can parse the RIF XML format, the RIF Presentation Syntax, and the <a href="http://www.w3.org/TR/2011/NOTE-rif-in-rdf-20110512/">RIF RDF format</a>, and we plan on doing others.</li><li>Run a set of transformations on the CST to convert it into an appropriate <a href="http://en.wikipedia.org/wiki/Abstract_syntax_tree">Abstract Syntax Tree</a> (AST). This can involve some tricky analysis, particularly for aggregates (which are a RIF extension).</li><li>Transformation of the AST into <a href="http://www.w3.org/TR/sparql11-query/">SPARQL 1.1 query</a> fragments.</li><li>Execute the engine, by processing the rules to generate more data until the capacity to generate new data has been exhausted. (My... <em>that's</em> a lot of hand waving).</li></ol>It's step 3 that I was interested in today.
<br />
<br />The rest of this post is about how Rex processes a CST into an AST using Clojure, and about some subsequent refactoring that went on. You have been warned...
<br />
<br />When I first wrote the CST to AST transformation step, it was to do a reasonably straightforward analysis of the CST. Most importantly, I needed to see the structure of the rule so that I could see what kind of data it depends on, thereby figuring out which other rules might need to be run once a given rule was executed. Since the AST is a tree structure, this made for relatively straightforward recursive functions.
<br />
<br />Next, I had to start identifying some CST structures that needed to be changed in the AST. This is where it got more interesting. Again, I had to write recursive functions, but instead of simply analyzing the data, it had to be changed. It turns out that this is handled easily by having a different function for each type of node in the tree. In the normal case the function then recurses on all of its children, and constructs an identical node type using the new children. The leaf nodes then just return themselves. The "different function" for each type is actually accessed with the same name, but dispatches on the node type. In Java that would need a visitor pattern, or perhaps a map of types to functors, but in Clojure it's handled trivially with <a href="http://clojure.org/multimethods">multimethods</a> or <a href="http://clojure.org/Protocols">protocols</a>. Unfortunately, the online resources for describing the multi-dispatch aspects of Clojure protocols are not clear, but Luke VanderHart and Stuart Sierra's book <a href="http://www.apress.com/9781430272311">Practical Clojure</a> covers it nicely.
<br />
<br />As an abstract example of what I mean, say I have an AST consisting of Conjunctions, Disjunctions and leaf nodes. Both Conjunctions and Disjunctions have a single field that contains a seq of the child nodes. These are declared with:<pre><code>(defrecord Conjunction [children])
<br />(defrecord Disjunction [children])</code></pre>The transformation function can be called <code>tx</code>, and I'll define it with multiple dispatch on the node type using multimethods:<pre><code>(defmulti tx class)
<br />
<br />(defmethod tx Disjunction [{c :children}]
<br /> (Disjunction. (map tx c)))
<br />
<br />(defmethod tx Conjunction [{c :children}]
<br /> (Conjunction. (map tx c)))
<br />
<br />(defmethod tx :default [n] n)
<br /></code></pre>This will reconstruct an identical tree to the original, though with all new nodes except the leaves. Now, the duplication in the Disjunction and Conjunction methods should be ringing alarm bells, but in real code the functions have more specific jobs to do. For instance, the Conjunction may want to group terms that meet a certain condition (call the test "<code>special-type?</code>") into a new node type (call it "<code>Foo</code>"):<pre><code>;; A different definition for tx on Conjunction
<br />(defmethod tx Conjunction [{c :children}]
<br /> (let [new-children (map tx c)
<br /> special-nodes (filter special-type? new-children)
<br /> other-nodes (filter (comp not special-type?) new-children)]
<br /> (Conjunction. (conj other-nodes (Foo. special-nodes)))))</code></pre>Hmmm... while writing that example I realized that I regularly run into the pattern of filtering out everything that meets a test, and everything that fails the test. Other than having to test everything twice, it seems too verbose. What I need is a function that will take a seq and a predicate and return a tuple containing a seq of everything that matches the predicate, and a second seq of everything that fails the predicate. I'm not seeing anything like that right now, so that may be a job for the morning.
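<br />
<br />Something like this minimal sketch would do it (the name "split-by" is just a placeholder, since no such function existed for me at the time):<pre><code>(defn split-by [pred s]
  [(filter pred s) (filter (complement pred) s)])

;; e.g. (split-by special-type? new-children) => [special-nodes other-nodes]</code></pre>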
<br />
<br />I should note that there is no need for a function to return the same type that came into it. There are several occasions where Rex returns a different type. For example, a conjunction between a Basic Graph Pattern (BGP) and the negation of another BGP becomes a MINUS operation between the two BGPs (a Basic Graph Pattern comes from SPARQL and is just a triple pattern for matching against the subject/predicate/object of a triple in an RDF store).
<br />
<br />Overall, this approach works very well for transforming the CST into the full AST. As I've needed to incorporate more features and optimizations over time, I found that I had two choices. Either I could expand the complexity of the operation for every type in the tree processing code, or I could perform different types of analysis on the entire tree, one after another. The latter makes the process far easier to understand, making the design more robust and debugging easier, so that's how Rex has been written. It makes analysis slightly slower, but analysis is orders of magnitude faster than actually running the rules, so that is not a consideration.
<br />
<br /><h3>Threading</h3> The first time through, analysis was simple:<pre><code>(defn analyse [rules]
<br /> (process rules))</code></pre>Adding a new processing step was similarly easy:<pre><code>(defn analyse [rules]
<br /> (process2 (process1 rules)))</code></pre>But once a third, then a fourth step appeared, it became obvious that I needed to use the <a href="http://clojure.github.com/clojure/clojure.core-api.html#clojure.core/->">Clojure threading macro</a>:<pre><code>(defn analyse [rules]
<br /> (-> rules
<br /> process1
<br /> process2
<br /> process3
<br /> process4))</code></pre>So now it's starting to look nice. Each step in the analysis process is a single function name implemented for various types. These names are then provided in a list of things to be applied to the rules, via the threading macro. There's a little more complexity (one of the steps picks up references to parts of the tree, and since each stage <em>changes</em> the tree, then these references will be pointing to an old and unused version of the tree. So that step has to be last), but it paints the general picture.
<br />
<br /><h3>Partial</h3> Another thing that I've glossed over is that each rule is actually a record that contains the AST as one of its members. The rules themselves are a seq which is in turn a member of a record containing a "Rule Program". So each process step actually ended up being like this:<pre><code>(defn process1 [rule-program]
<br /> (letfn [(process-rule [rule] (assoc rule :body (tx (:body rule))))]
<br /> (assoc rule-program :rules (map process-rule (:rules rule-program)))))</code></pre>Did you follow that?
<br />
<br />The bottom line is replacing the :rules field with a new one. It's mapping <code>process-rule</code> onto the seq of rules, and storing the result in a new rules-program, which is what gets returned. The <code>process-rule</code> function is defined locally as associating the :body of a rule with the <code>tx</code> function applied to the existing body. This creates a new rule that has the <code>tx</code> applied to it.
<br />
<br />This all looked fine to start with. A new rule program is created by transforming all the rules in the old program. A transformed rule is created by transforming the AST (the :body) in the old rule. But after the third analysis it became obvious that there was duplication going on. In fact, it was all being duplicated except for the <code>tx</code> step. But that was buried deep in the function. What was the best way to pull it out?
<br />
<br />To start with, the embedded <code>process-rule</code> function came out. After all, it was only inside the function to keep it hidden, not because it had to pick up a closure anywhere. This function then accepts the kind of transformation that it needs to do as a parameter:<pre><code>(defn convert-rule
<br /> [convert-fn rule]
<br /> (assoc rule :body (convert-fn (:body rule))))</code></pre>Next, we want a general function for converting all the rules, which can accept a conversion function to pass on to <code>convert-rule</code>. It does all the rules, so I just pluralized the name:<pre><code>(defn convert-rules
<br /> [conversion-fn rule-prog]
<br /> (assoc rule-prog :rules (map #(convert-rule conversion-fn %) (:rules rule-prog))))</code></pre>That works, but now the function getting mapped is looking messy (and messy leads to mistakes). I could improve it by defining a new function, but I just factored a function out of this function. Fortunately, there is a simpler way to define this new function. It's a "partial" application of <code>convert-rule</code>. But I'll still move it into a <code>let</code> block for clarity:<pre><code>(defn convert-rules
<br /> [conversion-fn rule-prog]
<br /> (let [conv-rule (partial convert-rule conversion-fn)]
<br /> (assoc rule-prog :rules (map conv-rule (:rules rule-prog)))))</code></pre>So now my original <code>process1</code> definition becomes a simple one-liner:<pre><code>(defn process1 [rule-program]
<br /> (convert-rules tx rule-program))</code></pre>That works, but the <code>rule-program</code> parameter is just sticking out like a sore thumb. Fortunately, we've already seen how to fix this:<pre><code>(def process1 (partial convert-rules tx))</code></pre>Indeed, all of the processing functions can be written this way:<pre><code>(def process1 (partial convert-rules tx))
<br />(def process2 (partial convert-rules tx2))
<br />(def process3 (partial convert-rules tx3))
<br />(def process4 (partial convert-rules tx4))</code></pre>It may seem strange that a function is now being defined with a <code>def</code> instead of a <code>defn</code>, but it's really not an issue. It's worth remembering that <code>defn</code> is just a macro that uses <code>def</code> to attach a symbol to a call to <code>(fn ...)</code>.
<br />
<br /><h3>Documenting</h3> Of course, my functions don't appear as sterile as what I've been typing here. I do indeed use documentation. That means that the <code>process1</code> function would look more like (noting that the docstring belongs before the argument vector):<pre><code>(defn process1
<br /> "The documentation for process1"
<br /> [rule-program]
<br /> (convert-rules tx rule-program))</code></pre>One of the nice features of the <code>defn</code> macro is the ease of writing documentation. This isn't as trivial with <code>def</code> since it's a special form, rather than a macro, but it's still not too hard to do. You just need to attach some metadata to the object, with a key of <code>:doc</code>. Unfortunately, I couldn't remember the exact syntax for this today, and rather than go trawling through books or existing code, <a href="http://tech.puredanger.com/">Alex</a> was kind enough to remind me:<pre><code>(def ^{:doc "The documentation for process1"}
<br /> process1 (partial convert-rules tx))</code></pre>
<br />
<br /><h3>Framework</h3> The upshot of all of this is a simpler framework for adding new steps to the existing analysis system. Adding a new analysis step just needs a new function thrown into the thread. I could put a call to <code>(partial convert-rules ...)</code> directly into the thread, but by using a <code>def</code> I get to name and document that step of the analysis. The only real work of the analysis is then done in the single multi-dispatch function, which is just as it should be.
<br />
<br />So right now my evening "hobby" has been JavaScript, while my day job is Clojure. I have to tell you, the day job is <em>much</em> more fun. Paulahttp://www.blogger.com/profile/03653112583629043593noreply@blogger.com0tag:blogger.com,1999:blog-6848574.post-76200515409002907002011-09-01T10:30:00.003-05:002011-09-01T11:17:43.528-05:00<h3>Robusta</h3>I was just posting this into Google+ when I realized I was typing more than I'd intended. It read more like a short blog post. Which reminded me.... "Oh yeah. I have a blog out there somewhere! Maybe I should write this in there."
<br />
<br />After spending a lot of recent nights on JavaScript I think I'm starting to get a feel for it. I started with Douglas Crockford's "<a href="http://oreilly.com/catalog/9780596517748">JavaScript: The Good Parts</a>", and am now plowing through David Flanagan's "<a href="http://oreilly.com/catalog/9780596805524/">JavaScript: The Definitive Guide</a>". (I met David when he came to Brisbane to run a short course on Java programming back in 1996. Nice guy. He said that he'd have preferred to call his book "Java in a Demitasse", but O'Reilly wanted it in their "Nutshell" series).
<br />
<br />After using languages like Ruby, Erlang, Scala and Clojure, I'm finding it a little frustrating, but it's hard to argue with the ubiquity of the platform. Fortunately it has closures and first class functions, though the variable scoping is bizarre. I've been enjoying the callback approach to asynchronous function calls, though the syntax tends to make the resulting code confusing to read. I'm mostly sticking to Crockford's subset of the language, and this does make things a little more sensible. Flanagan's book has been filling in the gaps for me, but it's especially useful for documenting libraries like HTML5 Canvas and the File API.
<br />
<br />As with all first attempts at using a language, mine is a little messy and inconsistent. However, it can't hurt to put it out there. I've built a simple tool for working with SPARQL endpoints (specifically aimed at Jena/Fuseki, but it should mostly work on others too). The important piece is the SPARQL connection object that it comes with (found in sparql.js). I'm hoping that this will be a useful object for more general application. It can even convert XML responses into a SPARQL-JSON structure (I wrote this after discovering Fuseki was ignoring my <em>Content-type</em> settings on queries).
<br />
<br />Unfortunately, it's missing one important piece, which is the ability to upload a file from a browser. In general, it's possible to upload a file using a form submission, but that encodes all the parameters into the request body, and the SPARQL HTTP protocol requires that the graph URI appear as a parameter in the URL of the request. In an attempt to get the <strong>graph</strong> parameter out of the body and into the URL, I even tried dynamically constructing the URL for the form submission, but the browser "cleverly" saw what I was doing and pushed the parameter back into the body. So I can't use form submission. Alternatively, JavaScript makes it easy to submit an HTTP POST operation with everything set the way you want it. However, the only way to read a local file is through the form submission process, which means I still can't do a file upload. In the end, I just used Fuseki's file upload servlet, but this has the problem of being non-standard, and it also doesn't like URIs that aren't http (yes Andy, that's why I asked you about this restriction - though I'd already run into it at work).
<br />
<br />The resulting system needed a name, so I called it <em>Robusta</em>. Everyone seems to be enamored with Arabica beans, but no one ever talks about Robusta beans. Don't get me wrong.... if I had to choose between the two I'd definitely go for the Arabica. But by blending in a small portion of Robusta beans you add a richness to the flavor of your coffee (it's also used as a cheap "filler" and promotes crema in espresso, but I like the flavor aspect). At the time, I came up with the name because I really needed some caffeine, and Arabica was too obvious. But in retrospect, I like the name, since adding a bit of SPARQL to your scripts can really enhance a system (OK, that's tacky. I probably need another one of those coffees). It wasn't until after I'd had some coffee that I thought to look for other projects with the same name, but by that point it was already up there.
<br />
<br />Robusta is still a work in progress, and it's mostly a late night project that fits around everything else that I'm doing. But I'm using it at work, and it makes my life easier. I'd like to know if anyone has ideas for it, or can point out errors, inefficiencies, or potential improvements. It's posted at GitHub as a part of the <a href="https://github.com/revelytix">Revelytix project group</a>, at:
<br /> <a href="https://github.com/revelytix/robusta">http://github.com/revelytix/robusta</a>Paulahttp://www.blogger.com/profile/03653112583629043593noreply@blogger.com0tag:blogger.com,1999:blog-6848574.post-42972713680446490622010-10-21T23:24:00.004-05:002010-10-22T10:40:21.667-05:00<h3>Clojure Experiences</h3> I've been learning Clojure recently, as my new job uses it frequently. I'm happy with this (it has me attending <a href="http://clojure-conj.org/">(first clojure-conj)</a> right now), but I'm still learning a lot about the language. I'm comfortable with functional programming, and of course, there's next to nothing to learn about syntax and grammar, but there is a steep learning curve to effectively use the libraries. Not only are there a lot of them, but they are not really well documented.<br /><br />OO languages tell you which functions belong to a class, but since Clojure functions don't really belong to anything it can sometimes be hard to find everything that is relevant to the structures you are dealing with. I've had a few people tell me that there are good websites out there that provide the sort of documentation I'd like, so I'll go looking for them soon.<br /><br />Meanwhile, I find myself looking at the libraries all the time. Other than the general principle of code reuse, the dominant paradigm in Clojure is to use the core libraries whenever possible. This is because those libraries are likely to be much more efficient than anything you can come up with on your own. They're also "lazy", meaning that the functions do very little work themselves, with all of the work being put off until you actually need it. Laziness is a good thing, and should show up in Clojure code as much as possible. Using the core libraries is the best way to accomplish that.<br /><br /><h3>Clojure Maps</h3><br />Part of my current task is to take a data structure that represents an RDF graph, and process it. The structure that I have now maps resource identifiers (sometimes URIs, but mostly internal blank node IDs) to structures representing all of the property/values associated with that resource. Unfortunately, the values are simple identifiers too, so to find their data I have to go back to the structure and get the resources for that object. This has to be repeated until everything found has no properties or is a literal (which has no properties). Also, the anonymous resources in RDF lists are showing up as data structures, which makes them difficult to work with as well.<br /><br />Following this kind of structure is easy enough in an imperative language, though quite tedious. However, in Clojure it is simply too inelegant to work with. So I want some way to make this to a nice nested structure that represents the nested data from the RDF graph (fortunately, this kind of graph is guaranteed to not have any cycles). In particular, I'd this new structure to be lazy wherever I can get it.<br /><br />The basic approach is simple: get the property/values for a given resource. Then convert the values (resources) that aren't literals to the actual data that they represent. This is done by getting the property/values of those resources, which is what I started with, so now I know I need recursion.<br /><br />Now there are a few caveats. First off, if a resource is actually the head of a list, I don't want to just get it's properties. Instead, I want to traverse the list (lazily, of course) and get all the values attached to the nodes. 
That's easy enough, and is done with a function to identify resources that are lists, and another one to construct the list (a recursive function that uses <a href="http://clojure.github.com/clojure/clojure.core-api.html#clojure.core/lazy-seq">lazy-seq</a>). Second, there is one property/value that I want to preserve, and that is the <code>:id</code> property. This property shows the URI or identifier for the original node, so the value here needs to be kept, and not expanded on (it would recurse indefinitely otherwise).<br /><br />After identifying these requirements, I figured I needed some way to convert all the key/value elements of a map into a new map. This sounds very much like converting all the elements in a sequence (a "seq") into a new sequence, a task which is done using the <a href="http://clojure.github.com/clojure/clojure.core-api.html#clojure.core/map">map</a> function. Is there a similar operation to be applied to a map? (The noun and the verb get confusing here, sorry).<br /><br />It turns out that Alex Miller had been dealing with exactly this problem just recently, and has built a new function called <a href="http://tech.puredanger.com/2010/09/24/meet-my-little-friend-mapmap/">mapmap</a> to do this operation. However, I wasn't aware of this at the time, so I had to pursue my own solution. I'm reasonably happy with my answer, and I learnt a bit in the process, so I'm glad I solved it myself, though it would have been nice to get that sleep back.<br /><br /><h3>Morphing a Map</h3><br />There have been a few times when I've needed to create a new map, and the first time I needed to I went to clojure.core to see what kind of map-creating functions were available to me. At that point I found <a href="http://clojure.github.com/clojure/clojure.core-api.html#clojure.core/hash-map">hash-map</a>, which takes a sequence containing an even number of items, and converts it into a map using the even-numbered offsets as the keys and the odd-numbered offsets as the values (with the first offset being "0"). So I could create a map of numbers to their labels using the following:<pre><code> (hash-map 1 "one" 2 "two" 3 "three")</code></pre>So was there a way to create this list from my existing map? (Well, yes, but was it <em>easy</em>?)<br /><br />I wanted to iterate over my existing map "entries" and create new entries. If you treat a map as a seq, then it appears as a sequence of 2-element vectors, with the first element being the key and the second element being the value. So if I want to morph the values in a map called "mymap" with a function called "morph", I could convert these pairs into new pairs with:<pre><code> (map (fn [[k v]] [k (morph v)]) mymap)</code></pre>For anyone unfamiliar with the syntax, my little anonymous function in the middle there is taking as its single parameter a 2-element vector, and destructuring it into the values k and v, then returning a new two element vector of k and (morph v). So let's apply this to a map of numbers to their names (incidentally, I don't code anything like this, but I do blog this way)....<pre><code> (def mymap {1 "one", 2 "two", 3 "three"})<br /> (def morph #(.toUpperCase %))<br /> (map (fn [[k v]] [k (morph v)]) mymap)</code></pre>The result is:<pre><code>([1 "ONE"] [2 "TWO"] [3 "THREE"])</code></pre>Looks good, but there is one problem. This result is a sequence of pairs, while <code>hash-map</code> just takes a sequence.
That's OK, we can just call <code>flatten</code> before passing it in:<pre><code> (apply hash-map (flatten (map (fn [[k v]] [k (morph v)]) mymap)))<br />{1 "ONE", 2 "TWO", 3 "THREE"}</code></pre>The "<code>apply</code>" was needed because <code>hash-map</code> needed lots of arguments (6 in this case) while the result of <code>flatten</code> would have been just 1 argument (a list).<br /><br />This is looking good. It's what I've done in the past, and it's hardly worthy of a blog post. But this time I had a new twist. My "values" were also structures, and often these structures were seqs as well. For instance, say I have a map of numbers to names in a couple of languages:<pre><code>(def langmap {1 ["one" "uno"], 2 ["two" "due"], 3 ["three" "tre"]})</code></pre>My morph method needs to change to accept a seq of strings instead of just a string, but that's trivial. So then the map step looks good, giving an output of:<pre><code>([1 ("ONE" "UNO")] [2 ("TWO" "DUE")] [3 ("THREE" "TRE")])</code></pre>But this fails to be loaded into a hash-map, with the error:<pre><code>java.lang.IllegalArgumentException: No value supplied for key: TRE (NO_SOURCE_FILE:0)</code></pre>This is exactly what bit me at 1am.<br /><br />The problem is simple. <a href="http://clojure.github.com/clojure/clojure.core-api.html#clojure.core/flatten">flatten</a> is recursive. It dives into the sub-sequences, and brings them up too. The result is trying to map 1 to "ONE", mapping "UNO" to 2, and so on, until it gets to "TRE" at which point it doesn't have a value to map it to, so it gives an error. Fortunately for me I had an odd number of items when everything was flattened, or else I would have been working with a corrupted map and may not have discovered it for some time.<br /><br />My first thought was to write a version of flatten that isn't recursive, but that's when I took a step back and asked myself if there was <em>another</em> way to construct a map. Maybe I shouldn't be using <code>hash-map</code>. So I looked at Clojure's <a href="http://clojure.org/cheatsheet">cheatsheet</a>, and discovered <a href="http://clojure.github.com/clojure/clojure.core-api.html#clojure.core/zipmap">zipmap</a>.<br /><br /><h3>zipmap</h3><br /><code>zipmap</code> wasn't quite what I wanted, in that instead of taking a sequence of pairs, it wants a pair of sequences. It then constructs a map where the <em>n</em>th key/value comes from the <em>n</em>th entry in the first sequence (the key) and the <em>n</em>th entry from the second sequence (the value). So I just needed to rotate, or transpose my data. Fortunately, the day before one of my co-workers had asked me exactly the same question.<br /><br />In this case, he had a sequence of MxN entries, looking something like:<pre><code>[[a1 a2 a3] [b1 b2 b3] [c1 c2 c3]]</code></pre>And the result that he wanted was:<pre><code>[[a1 b1 c1] [a2 b2 c2] [a3 b3 c3]]</code></pre>My first thought was some wildly inefficient loop-comprehension, when he asked me if the following would work (for some arg):<pre><code> (apply map list arg)</code></pre>Obviously he didn't have a repl going, so I plugged it in, and sure enough it worked. But why?<br /><br />I struggled to understand it until I looked up the documentation for <code>map</code> again, and remembered that it can take <em>multiple</em> sequences, not just one. I had totally forgotten this. When provided with <em>n</em> sequences, <code>map</code> requires a function that accepts <em>n</em> arguments.
It then goes through each of the sequences in parallel, with the output being a sequence where the <em>n</em>th entry is the result of calling the function with the <em>n</em>th entry of every source sequence. In this case, the function was just <code>list</code>, meaning that it creates a list out of the items from each sequence. The <code>apply</code> just expanded the single argument out into the required lists that <code>map</code> needed. This one line is a lovely piece of elegance, and deserves its own name:<pre><code> (defn rotate [l] (apply map list l))</code></pre><br />So the final composition for creating a new map with values modified from the original map is:<pre><code> (apply zipmap (rotate (map (fn [[k v]] [k (morph v)]) langmap)))<br />{3 ("THREE" "TRE"), 2 ("TWO" "DUE"), 1 ("ONE" "UNO")}</code></pre><br /><br />Now the actual function that I use inside map isn't nearly so simple. After all, I need to recursively traverse the RDF graph, plus I need to find RDF lists and turn them into sequences, etc. But this zipmap/rotate/map idiom certainly makes it easy.<br /><br />Incidentally, I could do exactly the same thing with Alex's <code>mapmap</code>:<pre><code> (mapmap key (comp morph val) langmap)</code></pre>Oh well. I'm still happy I discovered zipmap/rotate.Paulahttp://www.blogger.com/profile/03653112583629043593noreply@blogger.com1tag:blogger.com,1999:blog-6848574.post-55818884751035990382010-05-13T12:18:00.003-05:002010-05-13T12:25:22.615-05:00<h3>Feedburner</h3><br />Something reminded me that I had RSS going through Feedburner, so I tried to look at it. Turns out that after the Google move they want to update everyone's account, and I hadn't done it yet. So I gave them all my details, and the system told me that it didn't recognize me.<br /><br />So then I said I'd forgotten my password, at which point it recognized my email address and asked a "secret question". I know the answer to the question, and it's not even that secret, since I'm sure that most of my family could figure it out, but Feedburner claims I'm wrong. Could someone have changed this? Maybe, but there are only a handful of people who would know that answer, and I trust them not to do something like that.<br /><br />Not a problem, I'll just submit a report explaining my problem. Only it seems that Feedburner is exempt from that kind of thing. The closest you can get to help is an FAQ. Good one Google.<br /><br />So now what? Well, I guess I just create a new Feedburner link and take it from there. Sorry, but if you've been using RSS to follow this blog, then would you mind changing it please? It's annoying, I know.Paulahttp://www.blogger.com/profile/03653112583629043593noreply@blogger.com0tag:blogger.com,1999:blog-6848574.post-7368560528239018952010-05-13T07:03:00.004-05:002010-05-13T12:00:36.718-05:00<h3>Wrongful Indexing</h3> <br />Some years ago <a href="http://gearon.blogspot.com/2004/08/proof-reading-once-again-its-way-too.html">I commented on the number and type of indexes</a> that can be used for tuples. At the time, I pointed out that indexing triples required 3 indexes, and there were 2 appropriate sets of indexes to use. Similarly, quads can be indexed with 6 indexes, and there are 4 such sets. (5-tuples get silly, requiring 10 indexes, with 12 possible sets.)
In each case, I said that each set of indexes would work just as well as the others, and so I always selected the set that included the natural ordering of the tuples.<br /><br />So for RDF triples, the two sets of indexes are ordered by:<pre> subject,predicate,object<br /> predicate,object,subject<br /> object,subject,predicate</pre>and<pre> object,predicate,subject<br /> predicate,subject,object<br /> subject,object,predicate</pre>For convenience I have always chosen the first set, as this includes the natural ordering of subject/predicate/object, but it looks like I was wrong.<br /><br />In using these indexes I've always presumed random 3-tuples, but in reality the index is representing RDF. Whenever I thought about the data I was looking for, this seemed OK, but that's because I tended to think about properties on resources, and not other RDF structures. In particular, I was failing to consider lists.<br /><br />Since first building RDF indexes (2001) and writing about them (2004) I've learnt a lot about functional programming. This, in turn, led to an appreciation of lists, particularly in algorithms. I'm still not enamored of them in on-disk structures, but I do appreciate their utility and elegance in many applications. So it was only natural that when I was representing RDF graphs with Scala and I needed to read lists, then I used some trivial recursive code to build a Scala list, and it all looked great. But then I decided to port the Graph class to Java to avoid including the Scala Jars for a really lightweight library.<br /><br />I'd like to point out that I'm talking about a library function that can read a well-formed RDF list and return a list in whatever programming language the library is implemented in. The remainder of this post is going to presume that the lists are well formed, since any alternatives can never be returned as a list in an API anyway.<br /><br />Reading a list usually involves the subject/predicate/object (SPO) index. You start by looking up the head of the list as a subject, then the predicates <code>rdf:first</code> for the data at that point in the list, and <code>rdf:rest</code> for the rest of the list. Rinse and repeat until <code>rdf:rest</code> yields a value of <code>rdf:nil</code>. So for each node in the list, there is a lookup by subject, followed by two lookups by predicate. This is perfect for the SPO index.<br /><br />However, it's been bugging me that I have such a general approach, when the structure is predetermined. Why look up these two predicates so generally, when we know exactly what we want? What if we reduce the set we're looking in to just the predicates that we want and then go looking for the subjects? That would mean looking first by predicate, then subject, then object, leading to a PSO index. So what does that algorithm look like?<br /><br />First, look up the <code>rdf:rest</code> predicate, leading to an index of subject/object containing all list structures. Next, look up the <code>rdf:first</code> predicate, retrieving subject/objects containing all the list data. Now to iterate down the list no longer involves finding the subject followed by the predicate, in order to read the next list node, but rather it just requires finding the subject, and the list node is in the corresponding object. Similarly with the data stored in the node. We're still doing a fixed number of lookups in an index, which means that the overall complexity does not change at all.
Tree indexes will still give <em>O(log(N))</em> complexity, and hash indexes will still give <em>O(1)</em> complexity. However, each step can involve disk seeks, so it's worth seeing the difference.<br /><br />To compare more directly, using an SPO index requires every node to:<ul><li>Lookup across the entire graph by subject.</li><li>Lookup across the subject (2 or 3 predicates) for <code>rdf:first</code>.</li><li>Lookup across the subject (2 or 3 predicates) for <code>rdf:rest</code>.</li></ul><br />For the PSO index we have some initial setup:<ul><li>Lookup across the entire graph for the <code>rdf:first</code> predicate.</li><li>Lookup across the entire graph for the <code>rdf:rest</code> predicate.</li></ul>Then for every node:<ul><li>Lookup the <code>rdf:first</code> data for the value.</li><li>Lookup the <code>rdf:rest</code> data for the next node.</li></ul><br />It's important to note a few things, particularly for tree indexes. Trees are the most likely structure used when using a disk, so I'm going to concentrate on them. The number of subjects in a graph tends to scale up with the size of the graph, while the number of predicates is bounded. This is because predicates are used to express a model, with each predicate indicating a certain relationship. Any system trying to deal with the model needs some idea of the concepts it is dealing with, so it's <em>almost</em> impossible to deal with completely arbitrary relationships. If we know what the relationships are ahead of time, then there must be a fixed number of them. In contrast, subjects represent individuals, and these can be completely unbounded. So if we look up an entire graph to find a particular subject, then we may have to dive down a very deep tree to find that subject. Looking across the entire graph for a given predicate will never have to go very deep, because there are so few of them.<br /><br />So the first algorithm (using the SPO index) iteratively looks across every subject in the graph for each node in the list. The next two lookups are trivial, since nodes in a list will only have properties of <code>rdf:first</code>, <code>rdf:rest</code> and possibly <code>rdf:type</code>. The data associated with these properties will almost certainly be in the same block where the subject was found, meaning that there will be no more disk seeks.<br /><br />The second algorithm (using the PSO index) does a pair of lookups across every predicate in the graph. The expected number of disk seeks to find the first predicate is significantly fewer than for any of the "subject" searches in the first algorithm. Given how few predicates are in the system, then finding the second predicate may barely involve any disk seeks at all, particularly since the first search will have populated the disk cache with a good portion of the tree, and the similarities in the URIs of the predicates is likely to make both predicates very close to each other. Of course, this presumes that the predicates are even in a tree. Several systems (including one I'm writing right now) treat predicates differently because of how few there are. Indeed, a lot of systems will cache them in a hashtable, regardless of the on-disk structure. So the initial lookup is very inexpensive.<br /><br />The second algorithm then iterates down the list, just like the first one does. However, this time, instead of searching for the nodes out of every subject in the list, it will now be just searching for these nodes in the subjects that appear as list nodes. 
While lists are commonly used in some RDF structures, the subjects in all the lists typically form a very small minority out of all the subjects in a graph. Consequently, depending on the type and depth of trees being used, iterating through a list with the second algorithm could result in two or three (or more) fewer disk seeks for each node. That's a saving that can add up.<br /><br /><h3>Solid State Disks</h3><br />I've been talking about disk seeks, but this is an artificial restriction imposed by spinning disk drives. Solid State Disks (SSDs) don't have this limitation.<br /><br />People have been promoting SSDs for some years now, but I've yet to use them myself. In fact, most people I know are still using traditional spinning platters. The price difference is still a big deal, and for really large data, disk drives are still the only viable option. But this will change one day, so am I right to be concerned about disk seeks?<br /><br />Disk seeks are a function of data locality. When data has to be stored somewhere else on a disk, the drive head must physically seek across the surface to this new address. SSDs don't require anything to move, but there are still costs in addressing scattered data.<br /><br />While it is possible to address every bit of memory in a device in one step, in practice this is never done. This is because the complexity of the circuit grows exponentially as you try to address more and more data in one step. Instead, the memory is broken up into "banks". A portion of the address can now be used to select a bank, allowing the remaining bits in the address to select the required memory in just that bank. This works well, but it does lead to some delays. Selecting a new bank requires "setup", "hold" and "settling" times, all leading to delays. These delays are an order of magnitude smaller than seek delays for a spinning disk, but they do represent a limit on the speed of the device. So while SSDs are much faster than disk drives, there are still limits to their speed, and improvements in data locality can still have a significant impact on performance.Paulahttp://www.blogger.com/profile/03653112583629043593noreply@blogger.com2tag:blogger.com,1999:blog-6848574.post-50541738871471523252010-05-04T20:06:00.003-05:002010-05-04T21:00:35.095-05:00<h3>Web Services Solved</h3> <br />It's been a long couple of days, and I really want to relax instead of write, but it's been a few days and I've been promising myself that I'd write, so I figured I need to get something written before I can open a beer.<br /><br />First of all, the web services problem was trivial. I recently added a new feature that allowed <a href="http://jetty.codehaus.org/jetty/jetty-6/apidocs/org/mortbay/jetty/handler/ContextHandler.html">ContextHandlers</a> in <a href="http://jetty.codehaus.org/jetty/">Jetty</a> to be configured. Currently the only configuration option I've put in there is the one that was requested, and that is the size of a form. Apparently this is 200k by default, but if you're going to load large files then that may not be enough. Anyway, the problem came about when my code tried to read the maximum form size from the configuration. I wasn't careful enough to check if the context was being configured in the first place, so an NPE was thrown if it was missing.<br /><br />Fortunately, most people would never see the problem, since the default configuration file includes details for contexts, and this ends up in every build by default.
The reason I was seeing it is that Topaz replaces the configuration with their own (since it describes their custom resolvers), and this custom configuration file doesn't have the new option in it. Of course, I could just add it to Topaz, but the correct solution is to make sure that a configuration can't throw an NPE – which is exactly what I told the Ehcache guys, so it's fitting that I have to do it myself. :-)<br /><br /><h3>Hosting</h3><br />Since I'm on the topic of Topaz, it looks like the OSU/OSL guys and I have both the Topaz and Mulgara servers configured. They wouldn't typically be hosting individual projects (well, they do occasionally), but in this case it's all going in under the umbrella of Duraspace. Of course this has taken some time, and in the case of Topaz I'm still testing that it's all correct, but I think it's there. I'll be changing the DNS for Topaz over soon, and Mulgara was changed last week. Mulgara's DNS has propagated now, so I'm in the process of cutting a long-overdue release.<br /><br />One thing that changed in the hosting is that I no longer have a Linux host to build the distribution on. Theoretically, that would be OK, since I ought to be able to build on any platform. However, Mulgara is still distributed as a build for Java 1.5 (I've had complaints when I accidentally put out a release that was built for 1.6). This is easy to set up on Linux, since you just change the JAVA_HOME environment variable to make sure you're pointing to a 1.5 JDK. However, every computer I have here is a Mac. Once upon a time that didn't change anything, but now all JDKs point to JDK 1.6. That means I need to configure the compiler to output the correct version. It can be done, but Mulgara wasn't set up for it.<br /><br />If you read the <a href="http://ant.apache.org/manual/CoreTasks/javac.html">Ant documentation on compiling</a> you'll see that you can set the target to any JDK version you like. However, that would require editing 58 files (I just had to run a quick command to see that. Wow... I didn't realize it was so bad). I'm sure I'd miss a <<code>javac</code>> somewhere. Fortunately, there is another option, even if the Ant documents discourage it. There's a system parameter called <a href="http://ant.apache.org/manual/javacprops.html#target">ant.build.java.target</a> which will set the default value globally. I checked to make sure that nothing was going to be missed by this (i.e. that nothing was manually setting the target) and when it all looked good I changed the build script to set this to "1.5". I didn't change the corresponding script on Windows, but personally I only want this for distributions. Anyone who needs to set it up on Windows probably has the very JDK they want to run Mulgara on anyway.<br /><br />Well, that's my story, and I'm sticking to it.<br /><br /><h3>Semantic Universe</h3><br />What else? Oh yes. I wrote a <a href="http://www.semanticuniverse.com/blogs-common-sparql-extension.html">post for Semantic Universe</a>. It's much more technical than the other posts I've seen there, but I was told that would be OK. I'm curious to know how it will be received.<br /><br />I was interested in how it was promoted on Twitter. I wrote something that mixes linked data and SPARQL to create a kind of federated query (something I find to be very useful, BTW, and I think more people should be aware of it). However, in the process I mentioned that this shouldn't be necessary, since SPARQL 1.1 will be including a note on federated querying.
Despite SPARQL 1.1 only being mentioned a couple of times, <a href="http://twitter.com/SemUni/status/13325494782">the tweet</a> said that I discussed "how/why SPARQL 1.1 plans to be a bit more dazzling". Well, admittedly SPARQL 1.1 <em>will</em> be more dazzling, but my post didn't discuss that. Perhaps it was a hint to talk about that in a future post.<br /><br /><h3>Miscellanea</h3><br />Speaking of future posts, I realized that I've been indexing RDF backwards, at least for lists. It doesn't affect the maximum complexity of iterating a list, but it <em>does</em> affect the <em>expected complexity</em>. I won't talk about it tonight, but hopefully by mentioning it here I'll prompt myself to write about it soon.<br /><br />This last weekend was the final weekend that my in-laws were visiting from the other side of the planet, so I didn't get much jSPARQLc done. I hope to fix that tomorrow night. I'm even wondering if the Graph API should be factored out into its own sister project. It's turning out to be incredibly useful for reading and working with RDF data when you just want access to the structure and you don't need a full query engine. It would even plug directly into almost every query engine out there, so there's a lot of utility to it.<br /><br />I'm also <em>finally</em> learning <a href="http://hadoop.apache.org/">Hadoop</a>, since I've had more pressure to consider a clustered RDF store, much as <a href="http://www.cloudera.com/blog/2010/03/how-raytheon-researchers-are-using-hadoop-to-build-a-scalable-distributed-triple-store/">BBN have created</a>. I've read the <a href="http://labs.google.com/papers/mapreduce.html">MapReduce</a>, <a href="http://labs.google.com/papers/gfs.html">GFS</a> and <a href="http://labs.google.com/papers/bigtable.html">BigTable</a> papers, so I went into it thinking I'd be approaching the problem one way, but the more I learn the more I think it would scale better if I went in other directions. So for the moment I'm trying to avoid getting too many preconceived notions of architecture until I've learnt some more and applied my ideas to some simple cases. Of course, <a href="http://hadoop.apache.org/hive/">Hive</a> tries to do the same thing for relational data, so I think I need to look at the code in that project too. I have a steep learning curve ahead of me there, but I've been avoiding those recently, so it will do me some good.<br /><br />Other than that, it's been interviews and immigration lawyers. These are horribly time consuming, and way too boring to talk about, so I won't. See you tomorrow.Paulahttp://www.blogger.com/profile/03653112583629043593noreply@blogger.com1tag:blogger.com,1999:blog-6848574.post-31636259807478707692010-04-30T09:03:00.003-05:002010-04-30T10:31:13.807-05:00<h3>Topaz and Ehcache</h3> <br />Don't ask what I did 2 days ago, because I forget. It's one of the reasons I need to blog more. I also forgot because my brain got clouded due to yesterday's tasks.<br /><br />I didn't get a lot done yesterday for the simple reason that I was filling in paperwork for an immigration lawyer. For anyone who has ever had to do this mind numbing task, they probably know that you end up filling in mostly the same things that you filled in 12 months ago, but just subtly different, so there's no possibility of using copy/paste. They will also know that getting all of the information together can take half a day. Strangely, the hardest piece of information was my mother's birthday (why do they want this? I have no idea).
There is a bizarre story behind this that I won't go into right now, but with my mother asleep in Australia there was no way to ask her. Fortunately, I had two brothers online at the time: one lives here in the USA, and the other is a student in Australia (who was up late drinking with friends, and decided to get online to say hi before going to bed). Unfortunately, neither of them knew either (the well-oiled brother being completely unaware of why we didn't know).<br /><br />But I finally got it done, cleared my immediate email queue (only 65 to go!) and got down to work.<br /><br />My first task was to get the Topaz version of Mulgara up and running the same way it used to run 10 months ago. I had already tried going back through the subversion history for the project (ah, that's one of the things I did two days ago!), but with no success. However, I <em>had been</em> able to find out that others have had this error with <a href="http://www.ehcache.org/">Ehcache</a>. No one had a fix, since upgrading the library normally made their problem go away. Well I tried upgrading it myself, but without luck. Evidently the problem was in usage, but I didn't know if it was a problem in the code talking to Ehcache, or the XML configuration file that it uses. Since everything used to work without error, I figured that the code was probably OK, and that it was the configuration at fault. The complexity of the configuration file only deepened my suspicion.<br /><br />I didn't want to learn the ins-and-outs of an Ehcache configuration, so my first non-lawyer related task yesterday was to look at the code where the exception was coming from (thank goodness the Java compiler includes line numbers in class files by default). So it turned out that <a href="http://www.terracotta.org/">Terracotta</a> (the company who provides Ehcache) have nice navigable HTML versions of all their opensource code, which made this task much more pleasant than having to get it all from Subversion. This led me to <a href="http://ehcache.org/xref/net/sf/ehcache/distribution/MulticastKeepaliveHeartbeatSender.html#180">the line</a> that was throwing the exception, which looked like:<pre><code>List localCachePeers = cacheManager.getCachePeerListener("RMI").getBoundCachePeers();</code></pre>Great, a compound statement. OK, so I use them myself, but they're annoying when you debug. Was it <code>cacheManager</code> that was <code>null</code> or was it the return value from <code>getCachePeerListener("RMI")</code>?<br /><br />At this point I jumped around in the code for a bit (I quite like those hyperlinks. I've seen them before too. I should figure out which project creates these pages), looking for what initialized cacheManager. I didn't find definitive proof that it was set, but it looked pretty good. So I looked at <a href="http://ehcache.org/xref/net/sf/ehcache/CacheManager.html#1128">getCachePeerListener("RMI")</a> and discovered that it was a lookup in a HashMap. This is a prime candidate for returning <code>null</code>, and indeed the documentation for the method even states that it will return <code>null</code> if the scheme is not configured. Since the heartbeat code was making the presumption that it could perform an operation on the return value of this method, the "RMI" scheme is evidently supposed to be configured in every configuration.
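<br /><br />Just to illustrate the kind of defensiveness I have in mind, the compound call could be broken up so that a missing listener is reported rather than blowing up. This is only a sketch using the names from the line quoted above, not the fix that actually went into Ehcache, and the listener type and logging call are my own assumptions:<pre><code>// Sketch only: report a missing "RMI" listener instead of throwing an NPE.
CacheManagerPeerListener listener = cacheManager.getCachePeerListener("RMI");
if (listener == null) {
  // A warning with a clear message beats a NullPointerException in a heartbeat thread.
  LOG.warning("No cache peer listener is configured for the RMI scheme");
  return Collections.emptyList();
}
List localCachePeers = listener.getBoundCachePeers();</code></pre>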
The fact that it's possible for this method to return <code>null</code> (even if it's not supposed to) means that the calling code is not defensive enough (any kind of NullPointerException is unacceptable, even if you catch it and log it). Also, the fact that something is always supposed to be configured for "RMI" had me looking in the code to discover where listeners get registered. This turned out to come from some kind of configuration object, which looked like it had been built from an XML file.<br /><br />So the problem appears to be the combination of something that's missing from the configuration file, and a presumption that it will be there (i.e. the code couldn't handle it if the item was missing). At this point I joined a forum and described the issue, both to point out that the code should be more defensive, and also to ask what is missing. In the meantime, I tried creating my own version of the library with a fix in it, and discovered that the issue did indeed go away. Then this morning I received a message explaining what I <a href="http://ehcache.org/documentation/distributed_caching_with_rmi.html">needed to configure</a>, and also that the code now deals with the missing configuration. It still complains on every heartbeat (at 5 second intervals), but now it tells you what's wrong, and how to fix it:<pre><code>WARNING: The RMICacheManagerPeerListener is missing. You need to configure<br /> a cacheManagerPeerListenerFactory with<br /> class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"<br /> in ehcache.xml.</code></pre>Kudos to "gluck" for the quick response. (Hey, I just realized – "gluck" is from Brisbane. My home town!)<br /><br />Incidentally, creating my own version of Ehcache was problematic in itself. It's a Maven project, and when I tried to build "package" it attempted to run all the tests, which took well over an hour. Coincidentally, it also happened to be dinner time, so I came back later, only to discover that not all of the tests had passed, and that the JAR files had not been built. Admittedly, it was an older release, but it <em>was</em> a release, so I found this odd. In the end, I avoided the tests by removing the code, and running the "package" target again.<br /><br />With all the errors out of the way I went back to the Topaz system again and ran it. As I said earlier, it was no longer reporting errors. But then when I tried to use queries against it, it was completely unresponsive. A little probing found that it wasn't listening for HTTP at all, so I checked the log, and sure enough:<pre><code>EmbeddedMulgaraServer> Unable to start web services due to: null [Continuing]</code></pre>Argh.<br /><br />Not only do I have to figure out what's going on here, it also appears that someone (possibly me) didn't code this configuration defensively enough! Sigh.<br /><br />At that point it was after dinner, and I had technical reading to do for a job I might have. Well, I've received the offer, but it all depends on me not being kicked out of the country.Paulahttp://www.blogger.com/profile/03653112583629043593noreply@blogger.com1tag:blogger.com,1999:blog-6848574.post-29913887904991060882010-04-26T20:52:00.004-05:002010-04-26T23:21:18.948-05:00<h3>Multitasking</h3> <br />At the moment I feel like I have too many things on the boil.
I'm regularly answering <a href="http://osuosl.org/">OSU/OSL</a> about porting data over from <a href="http://topazproject.org/trac/">Topaz</a> and <a class="zem_slink" href="http://mulgara.org/" title="Mulgara (software)" rel="homepage">Mulgara</a>, I'm supposed to be getting work done on <a class="zem_slink" href="http://en.wikipedia.org/wiki/SPARQL" title="SPARQL" rel="wikipedia">SPARQL</a> Update 1.1 (which suffered last week while I looked around for a way to stay in the USA), I'm trying to track down some unnecessary sorting that is being done by Mulgara queries in a Topaz configuration, I'm trying to catch up on reading (refreshing my memory on some important Semantic Web topics so that I keep nice and current), I'm trying to find someone who can help us not get kicked out of the country (long story), I'm responding to requests on Mulgara, and when I have some spare time (in my evenings and weekends) I'm trying to make <a href="http://code.google.com/p/jsparqlc">jSPARQLc</a> look more impressive.<br /><br />So how's it all going?<br /><br /><h3>OSU/OSL</h3><br />Well, OSU/OSL are responding slowly, which is frustrating, but also allows me the time to look at other things, so it's a mixed blessing. They keep losing my tickets, and then respond some time later apologizing for not getting back to me. However, they're not entirely at fault, as I have sudo access on our server, and could do some of this work for myself. The thing is that I've been avoiding the learning curves of Mailman and Trac porting while I have other stuff to be doing. All the same, we've made some progress lately, and I'm really hoping to switch the DNS over to the new servers in the next couple of days. Once that happens I'll be cutting an overdue release to Mulgara.<br /><br /><h3>SPARQL Update 1.1</h3><br />I really should have done some of this work already, but my job (and impending lack thereof) has interfered. Fortunately another editor has stepped up to help here, so with his help we should have it under control for the next publication round.<br /><br />The biggest issues are:<ol><li>Writing possible responses for each operation. In some cases this will simply be success/failure, but for others it will mean describing partial success. For instance, a long-running <code>LOAD</code> operation may have loaded 100,000 triples before failing. Most systems want that data to stay in there, and not roll back the change, and we need some way to report what has happened.</li><li>Dealing with an equivalent for <code>FROM</code> and <code>FROM NAMED</code> in <code>INSERT/DELETE</code> operations. Using <code>FROM</code> in a <code>DELETE</code> operation makes it look like this is the graph that you want to remove data from, whereas we really want to describe the list of graphs (and/or named graphs) that affect the <code>WHERE</code> clause. The last I read, the suggestion to use <code>USING</code> and <code>USING NAMED</code> instead was winning out. The problem is that no one really likes it, though they like every other suggestion even less. :-)</li></ol><br /><br />I doubt I'll get much done before the next meeting, but at least I did a little today, and I've been able to bring the other editor up to speed.<br /><br /><h3>Sorting</h3><br />This is a hassle that's been plaguing me for a while. A long time back <a href="http://www.plos.org/">PLoS</a> complained about queries that were taking too long (like, up to 10 minutes!).
After looking at them, I found that a lot of sorting of a lot of data was going on, so I investigated why.<br /><br />From the outset, Mulgara adopted "Set Semantics". This meant that everything appeared only once. It made things a little harder to code, but it also made the algebra easier to work with. In order to accomplish this cleanly, each step in a query resolution removed duplicates. I wasn't there, so I don't know why the decision wasn't made to just leave it to the end. Maybe there was a good reason. Of course, in order to remove these duplicates, it had to order the data.<br /><br />When SPARQL came along, the pragmatists pointed out that not having duplicates was a cost, and for many applications it didn't matter anyway. So they made duplicates allowable by default, and introduced the <code>DISTINCT</code> keyword to remove them if necessary, just like SQL. Mulgara didn't have this feature (though the Sesame-Mulgara bridge hacked it to work by selecting all variables across the bridge and projecting out the ones that weren't needed), but given the cost of this sorting, it was obvious we needed it.<br /><br />The sorting in question came about because the query was a UNION between a number of product terms (or a disjunction of conjunctions). In order to keep the output of the UNION ordered, each of the product terms was sorted first. Of course, without the sorting, a UNION can be a trivial operation, but with it the system was very slow. Actually, the query in question was more like a UNION between multiple products, with some of the product terms being UNIONS themselves. The resulting nested sorting was painful. Unfortunately, the way things stood, it was necessary, since there was no way to do a conjunction (product) without having the terms sorted, and since some of the terms could be UNIONS, the result of a UNION had to be sorted.<br /><br />The first thing I did was to factor the query out into a big UNION between terms (a sum-of-products, so that an expression shaped like (A or B) and C becomes (A and C) or (B and C)). Then I manually executed each one to find out how long it took. After I added up all the times, the total was about 3 seconds, and most of that time was spent waiting for <a href="http://lucene.apache.org/java/docs/">Lucene</a> to respond (something I have no control over), so this was looking pretty good.<br /><br />To make this work in a real query I had to make the factoring occur automatically, I had to remove the need to sort the output of a UNION, and I had to add a query syntax to TQL to turn this behavior on and off.<br /><br />The syntax was already done for SPARQL, but PLoS were using TQL through Topaz. I know that a number of people use TQL, so I wasn't prepared to break the semantics of that language, which in turn meant that I couldn't introduce a <code>DISTINCT</code> keyword. After asking a couple of people, I eventually went with a new keyword of <code>NONDISTINCT</code>. I hate it, but it also seemed to be the best fit.<br /><br />Next I did the factorization. Fortunately, Andrae had introduced a framework for modifying a query to a <a href="http://en.wikipedia.org/wiki/Fixed_point_%28mathematics%29">fixpoint</a>, so I was able to add to that for my algebraic manipulation. I also looked at other expressions, like differences (which was only in TQL, but is about to become a part of SPARQL) and Optional joins (which were part of SPARQL, and came late into TQL).
It turns out that there is a lot that you can do to expand a query to a sum-of-products (or as close to one as possible), and fortunately it was easy to accomplish (thanks Andrae).<br /><br />Finally, I put the code in to only do this factorization if a query was <em>not</em> supposed to be <code>DISTINCT</code> (the default in SPARQL, and if the new keyword is present for TQL). Unexpectedly, this ended up being the trickiest part. Part of the reason was that some UNION operations still needed to have the output sorted if they were embedded in an expression that couldn't be expanded out (a rare, though possible situation, but only when mixing with differences and optional joins).<br /><br />I needed lots of tests to be sure that I'd done things correctly. I mean, this was a huge change to the query engine. If I'd got it wrong, it would be a serious issue. As a consequence, this code didn't get checked in and used in the timeframe that it ought to have. But finally, I felt it was correct, and I ran my 10 minute queries against the PLoS data.<br /><br />Now the queries were running at a little over a minute. Well, this was an order of magnitude improvement, but still 30 times slower than I expected. What had happened? I checked where it was spending its time, and it was still in a <code>sort()</code> method. Sigh. At a guess, I missed something in the code that allows sorting when needed, and avoids it the rest of the time.<br /><br />Unfortunately, the time taken to get to that point had led to other things becoming important, and I didn't pursue the issue. Also, the only way to take advantage of this change was to update Topaz to use <code>SELECT NONDISTINCT</code>, but that keyword was going to fail unless run on a new Mulgara server. This meant that I couldn't update Topaz until I knew they'd moved to a newer Mulgara, and that didn't happen for a long time. Consequently, PLoS didn't see a performance change, and I ended up trying to improve other things for them rather than tracking it down. In retrospect, I confess that this was a huge mistake. PLoS recently reminded me of their speed issues with certain queries, but now they're looking at other solutions to it. Well, it's my fault that I didn't get it all going for them but that doesn't mean I should never do it, so I'm back at it again.<br /><br />The problem queries only look really slow when executed against a large amount of data, so I had to get back to the PLoS dataset. The queries also meant running the Topaz setup, since they make use of the Topaz Lucene resolvers. So I updated Topaz and built the system.<br /><br />Since I was going to work on Topaz, I figured I ought to add in the use of <code>NONDISTINCT</code>. This was trickier than I expected, since it looked like the Topaz code was not only trying to generate TQL code, it was also trying to re-parse it to do transformations on it. The parser in question was <a href="http://www.antlr.org/">Antlr</a>, which I have limited experience with, so I spent quite a bit of time trying to figure out which instances of <code>SELECT</code> could have a <code>NONDISTINCT</code> appended to them. In the end, I decided that all of the parsing was really for their own <a href="http://topazproject.org/trac/wiki/Topaz/Manual/Section11#ObjectQueryLanguage">OQL</a> language (which looks a lot like TQL). I hope I was right!<br /><br />After spending way too long on Topaz, I took the latest updates from SVN, and compiled the Topaz version of Mulgara.
Then I ran it to test where it was spending time in the query.<br /><br />Unfortunately, I immediately started getting regular INFO messages of the form:<pre><code>MulticastKeepaliveHeartbeatSender> Unexpected throwable in run thread. Continuing...null<br />java.lang.NullPointerException<br /> at net.sf.ehcache.distribution.MulticastKeepaliveHeartbeatSender$MulticastServerThread.createCachePeersPayload(MulticastKeepaliveHeartbeatSender.java:180)<br /> at net.sf.ehcache.distribution.MulticastKeepaliveHeartbeatSender$MulticastServerThread.run(MulticastKeepaliveHeartbeatSender.java:137)</code></pre>Now Mulgara doesn't make use of ehcache at all. That's purely a Topaz thing, and my opinion to date has been that it's more trouble than it's worth. This is another example of it. I really don't know what could be going on here, but luckily I kept open the window where I updated the source from SVN, and I can see that someone has modified the class:<pre><code> org.topazproject.mulgara.resolver.CacheInvalidator</code></pre>I can't guarantee that this is the problem, but I've never seen it before, and no other changes look related.<br /><br />But by this point I'd reached the end of my day, so I decided I should come back to it in the morning (errr, maybe that will be <em>after</em> the SPARQL Working Group meeting).<br /><br /><h3>Jetty</h3><br />Despite that describing a good portion of my day (at least, those parts not spent in correspondence), I also got a few things done over the weekend. The first of these was a request for a new feature in the Mulgara <a href="http://jetty.codehaus.org/jetty/">Jetty</a> configuration.<br /><br />One of our users has been making heavy use of the REST API (yay! That time wasn't wasted after all!) and had found that Jetty was truncating their POST methods. It turns out that Jetty restricts this to 200,000 characters by default, and it wasn't enough for them. I do have to wonder what they're sticking in their queries, but OK. Or maybe they're POSTing RDF files to the server? That might explain it.<br /><br />Jetty normally lets you define a lot of configuration with system parameters from the command line, or with an XML configuration file, and I was asked if I could allow either of those methods. Unfortunately, our embedded use of Jetty doesn't allow for either of these, but since I was shown exactly what was wanted I was able to track it down. A bit of 'grepping' for the system parameter showed me the class that gets affected. Then some Javadoc surfing took me to the appropriate interface (Context), and then I was able to go grepping through Mulgara's code. I found where we had access to these Contexts, and fortunately the Jetty configuration was located nearby. Up until this point Jetty's Contexts had not been configurable, but now they are. I only added in the field that had been requested, but everything is set up to add more with just two lines of code each - plus the XSD to describe the configuration in the configuration file.<br /><br /><h3>jSPARQLc</h3><br />My other weekend task was to add <code>CONSTRUCT</code> support to <a href="http://code.google.com/p/jsparqlc/">jSPARQLc</a>. Sure, no one is using it yet, but Java needs so much boilerplate to make SPARQL work, that I figure it will be of use to <em>someone</em> eventually – possibly me. I'm also finding it to be a good learning experience for why JDBC is the wrong paradigm for SPARQL. 
I'm not too worried about that though, as the boilerplate stuff is all there, and it would be easy to clean it up to something that doesn't try to conform to SPARQL. But for the moment it's trying to make SPARQL look like JDBC, and besides there's already another library that isn't trying to look like JDBC. I'd better stick to my niche.<br /><br />I've decided that I'm definitely going to go with StAX to make forward-only result sets. However, I'm not sure if there is supposed to be a standard configuration for JDBC to set the desired form of the result set, so I haven't started on that yet.<br /><br />The result of a <code>CONSTRUCT</code> is a graph. By default we can expect an RDF/XML document, though other formats are certainly possible. I'm not yet doing content negotiation with jSPARQLc, though that may need to be configurable, so I wanted to keep an open mind about what can be returned. That means that standard <code>SELECT</code> queries could return <a href="http://www.w3.org/TR/rdf-sparql-XMLres/">SPARQL Query Result XML</a> or <a href="http://www.w3.org/TR/rdf-sparql-json-res/">JSON</a>, and <code>CONSTRUCT</code> queries could result in RDF/XML, <a href="http://www.w3.org/DesignIssues/Notation3">N3</a>, or <a href="http://n2.talis.com/wiki/RDF_JSON_Specification">RDF-JSON</a> (Mulgara supports all but the last, but maybe I should add that one in. I've already left space for it).<br /><br />Without content negotiation, I'm keeping to the XML formats for the moment, with the framework looking for the other formats (though it will report that the format is not handled). Initially I thought I might have to parse the top of the file, until I cursed myself for an idiot and looked up the content type in the header. Once the parameters have been removed, I could use the content type to do a "look up" for a parser constructor. I like this approach, since it means that any new content types I want to handle just become new entries in the look-up table.<br /><br />This did leave me wondering if every SPARQL endpoint was going to fill in the Content-Type header, but I presume they will. I can always try a survey of servers once I get more features into the code.<br /><br />Parsing an RDF/XML graph is a complex process that I had no desire to attempt (it could take all week to get it right - if not longer). Luckily, Jena has the ARP parser to do the job for me. However, the ARP parser is part of the main Jena jar, which seemed excessive to me. Fortunately, Jena's license is BSD, so it was possible to bring the ARP code in locally. I just had to update the packages to make sure it wouldn't conflict if anyone happens to have their own Jena in the classpath.<br /><br />Funnily enough, while editing the ARP files (I'm doing this project "oldschool" with <a href="http://www.vim.org/">VIM</a>), I discovered copyright notices for Plugged In Software. For anyone who doesn't know, Plugged In Software was the company that created the Tucana Knowledge Store (later to be open sourced as Kowari, then renamed to Mulgara). The company changed its name later on to match the software, but this code predated that. Looking at it, I seem to recall that the code in question was just a few bugfixes that Simon made. But it was still funny to see.<br /><br />Once I had ARP installed, I could parse a graph, but into what? I'm not trying to run a triplestore here, just an API. So I reused an interface I came up with when I built my SPARQL test framework and needed to read a graph.
The interface isn't fully indexed, but it lets you do a lot of useful things if you want to navigate around a graph. For instance, it lets you ask for the list of properties on a subject, or to find the value(s) of a particular subject's property, or to construct a list from an RDF collection (usually an iterative operation). Thinking that I might also want to ask questions about particular objects (or even predicates), I've added in the other two indexes this time, but I'm in two minds about whether they really need to be there.<br /><br />The original code for my graph interface was in Scala, and I was tempted to bring it in like this. But one of the purposes of this project was to be lightweight (unfortunately, I lost that advantage when I discovered that ARP needs <a href="http://xerces.apache.org/">Xerces</a>), so I thought I should try to avoid the Scala JARs. Also, I thought that the exercise of bringing the Scala code into Java would refresh the API for me, as well as refresh me on Scala (which I haven't used for a couple of months). It did all of this, as well as having the effect of reminding me why Scala is so superior to Java.<br /><br />Anyway, the project is getting some meat to it now, and it's been fun to work on in my evenings, and while I've been stuck indoors on my weekends. If anyone has any suggestions for it, then please feel free to let me know.<br /><br />Paulahttp://www.blogger.com/profile/03653112583629043593noreply@blogger.com1tag:blogger.com,1999:blog-6848574.post-672214955402291722010-04-24T16:39:00.003-05:002010-04-24T20:49:09.949-05:00<h3>Program Modeling</h3> <br />The rise of <a href="http://en.wikipedia.org/wiki/Model-driven_architecture">Model Driven Architecture</a> in development has created a slew of modeling frameworks that can be used as tools for this design process. While some MDA tools involve a static design and a development process, others seek to have an interactive model that allows developers to work with the model at runtime. Since an RDF store allows models developed in RDFS and OWL to be read and updated at runtime, it seems natural to use the modeling capabilities of RDFS and OWL to provide yet another framework for design and development. RDF also has the unusual characteristic of seamlessly mixing instance data with model data (the so-called "ABox" and "TBox"), giving the promise of a system that allows both dynamic modeling and persistent data all in a single integrated system. However, there appear to be some common pitfalls that developers fall prey to, which make this approach less useful than it might otherwise be.<br /><br />For good or ill, two of the most common languages used on large scale projects today are Java and C#. Java in particular also has good penetration on the web, though for smaller projects more modern languages, such as Python or Ruby, are more often deployed.
There are lots of reasons for Java and C# to be so popular on large projects: They are widely known and it can be easy to assemble a team around them; both the JVM and .Net engines have demonstrated substantial benefits in portability, memory management and optimization through JIT compiling; they have established a reputation for stability over their histories. Being a fan of functional programming and modern languages, I often find Java to be frustrating, but these strengths often bring me back to Java again, despite its shortcomings. Consequently, it is usually with Java or C# in mind that MDA projects start out trying to use OWL modeling.<br /><br />On a related note, enterprise frameworks regularly make use of <a href="http://www.hibernate.org/">Hibernate</a> to store and retrieve instance data using a relational database (RDBMS). Hibernate maps object definitions to an equivalent representation in a database table using an Object-Relational Mapping (ORM). While not a formal MDA modeling paradigm, a relational database schema forms a model, in the same way that UML or MOF does (only less expressive). While an ORM is not a tool for MDA, it nevertheless represents the code of a project in the form of a model, with instance data that is interactive at runtime.<br /><br />Unfortunately, the ORM approach offers a very weak form of modeling, and it has no capability to dynamically update at runtime. Several developers have looked at this problem and reasoned that perhaps these problems could be solved by modeling in RDF instead. After all, an RDF store allows a model to be updated as easily as the data, and the expressivity of OWL is far greater than that of the typical RDBMS schema. To this end, we have seen a few systems which have created an Object-Triples Mapping (OTM), with some approaches demonstrating more utility than others.<br /><br /><h3>Static Languages</h3><br />When OTM approaches are applied to Java and C#, we typically see an RDFS schema that describes the data to be stored and retrieved. This can be just as effective as an ORM on a relational database, and has the added benefit of allowing expansions of the class definitions to be implemented simply as extra data to be stored, rather than the whole change management demanded with the update of an RDBMS. An RDF store also has the benefit of being able to easily annotate the class instances being stored, though this is not reflected in the Java/C# classes and requires a new interface to interact with it. Unfortunately, the flow of modeling information through these systems is typically one-way, and does not make use of the real power being offered by RDF.<br /><br />ORM in Hibernate embeds the database schema into the programming language by mapping table schemas to class descriptions in Java/C# and table entries to instances of those classes in runtime memory. Perhaps through this example set by ORM, we often see OTM systems mapping OWL classes to Java/C# classes, and RDF instances to Java/C# instances. This mapping seems intuitive, and it has its uses, but it is also fundamentally flawed.<br /><br />The principal issue with OTM systems that attempt to embed themselves in the language is that static languages (like the popular Java and C# languages) are unable to deal with arbitrary RDF. RDF and OWL work on an Open World Assumption, meaning that there may well be more of the model that the system is not yet aware of, and should be capable of taking into consideration.
However, static languages are only able to update class definitions outside of runtime, meaning that they cannot accept new modeling information during runtime. It <em>is</em> possible to define a new class at runtime using a bytecode editing library, but then the class may only be accessed through meta-programming interfaces like reflection, defeating the purpose of the embedding in the first place. This is what is meant by the flow of modeling information being one-way: updates to the program code can be dynamically handled by a model in an RDF store, but updates to the model cannot be reflected by corresponding updates in the program.<br /><br />But these programming languages are Turing Complete. We ought to be able to work with dynamic modeling in triples with static languages, so how do we approach it? The solution is to abandon the notion of embedding the model into the language. These classes are not dynamically reconfigurable, and therefore they cannot update with new model updates. Instead, object structures that <em>can</em> be updated can be used to represent the model. Unfortunately, this no longer means that the model is being used to model the programming code (as desired in MDA), but it does mean that the models are now accurate, and can represent the full functionality being expressed in the RDFS/OWL model.<br /><br />As an example, it is relatively easy to express an instance of a class as a Java Map, with properties being the keys, and the "object" values being the values in the map. This is exactly the same as the way structures are expressed in Perl, so it should be a familiar approach to many developers. These instances should be constructed with a factory that takes a structure that contains the details of an OWL class (or, more likely, that subset of OWL that is relevant to the application). In this way it is possible to accurately represent any information found in an RDF store, regardless of foreknowledge. I can personally attest to the ease and utility of this approach, having written a version of it over two nights, and then providing it to a colleague who used it along with Rails to develop an impressive prototype ontology and data editor, complete with rules engine, all in a single day. I expect others can cite similar experiences.<br /><br /><h3>Really Embedding</h3><br />So we've seen that static languages like C# and Java can't dynamically embed "Open" models like RDF/OWL, but are there languages that can?<br /><br />In the last decade we've seen a lot of "dynamic" languages gaining popularity, and to various extents, several of these offer that functionality. The most obvious example is <a href="http://www.ruby-lang.org/en/">Ruby</a>, which has explicit support for opening up already defined classes in order to add new methods, or redefine existing ones. Smalltalk has <a href="http://coweb.cc.gatech.edu/cs2340/6243">Metaprogramming</a>. Meta-programming isn't an explicit feature for many other languages, but so long as the language is dynamic there is often a way, such as these methods for <a href="http://blog.ianbicking.org/2007/08/08/opening-python-classes/">Python</a> and <a href="http://transfixedbutnotdead.com/2010/01/14/anyone-for-perl-6-metaprogramming/">Perl</a>.<br /><br />Despite the opportunity to embed models into these languages, I'm unaware of any systems which do so. It seems that the only forms of OTM I can find are in the languages which already have an ORM, and are probably better used with that paradigm. 
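<br /><br />Going back to the static-language approach for a moment, here is a minimal sketch of the Map-based representation described above. All the names are made up for illustration; this isn't the API of any particular library, and a real version would also make use of the class description read from the store:<pre><code>import java.util.*;

// An "open" instance: properties are just keys, so new properties in the model
// can be represented without recompiling any Java classes.
class RdfInstance {
  private final String subject;   // subject URI
  private final Map<String, List<Object>> properties = new HashMap<String, List<Object>>();

  RdfInstance(String subject) { this.subject = subject; }

  void add(String property, Object value) {
    List<Object> values = properties.get(property);
    if (values == null) {
      values = new ArrayList<Object>();
      properties.put(property, values);
    }
    values.add(value);
  }

  List<Object> values(String property) {
    List<Object> values = properties.get(property);
    if (values == null) {
      return Collections.emptyList();
    }
    return values;
  }

  Set<String> propertyNames() { return properties.keySet(); }
  String subject() { return subject; }
}

// A factory built from whatever description of the OWL class was read at runtime,
// so the model side can change without any change to this code.
class RdfInstanceFactory {
  private final Map<String, Object> classDescription;

  RdfInstanceFactory(Map<String, Object> classDescription) {
    this.classDescription = classDescription;
  }

  RdfInstance newInstance(String subject) {
    // Validation or defaults based on classDescription would go here.
    return new RdfInstance(subject);
  }
}</code></pre>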
There are numerous libraries for accessing RDF in each of the above languages, such as the Redland RDF Language bindings for <a href="http://librdf.org/docs/ruby.html">Ruby</a> and <a href="http://librdf.org/docs/python.html">Python</a>, <a href="http://raa.ruby-lang.org/project/rena/">Rena</a> and <a href="http://activerdf.org/">ActiveRDF</a> in Ruby, <a href="http://www.rdflib.net/">RDFLib</a> and <a href="http://infomesh.net/pyrple/">pyrple</a> in Python, <a href="http://www.perlrdf.org/">Perl RDF</a>... the list goes on and continues to grow. But none of the libraries I know perform any kind of language embedding in a dynamic language. My knowledge in this space is not exhaustive, but the lack of obvious candidates tells a story on its own.<br /><br />Is this indicative of dynamic languages not needing the same kind of modeling storage that static languages seem to require? Java and C# projects often use Hibernate and similar systems in large scale commercial applications with large development teams, while dynamic languages are often used by individuals or small groups to quickly put together useful systems that aim at a very different target market. But as commercial acceptance of dynamic languages develops further, perhaps this kind of modeling would be useful in future. In fact a good modeling library like this could well show a Java team struggling with their own version of an OTM, just what they've been missing in their closed world.<br /><br /><h3>Topaz</h3><br />I wrote this essay partly out of frustration with a system I've worked with called <a href="http://www.topazproject.org/trac/">Topaz</a>. Topaz is a very clever piece of OTM code, written for Java, and built over my own project <a href="http://mulgara.org/">Mulgara</a>. However, Topaz suffers from all of the closed world problems I outlined above, without any apparent attempt to mitigate the use of RDF by reading extra annotations, etc. It has been used by the <a href="http://www.plos.org/">Public Library of Science</a> for their data persistence, but they have been unhappy with it, and it will soon be replaced.<br /><br />While performance in Mulgara (something I'm working on), in Topaz's use of Mulgara, and in Topaz itself, has been an issue, I believe that a deeper problem lay in the use of a dynamic system to represent static data. My knowledge of Topaz has me wondering why the system didn't simply choose to use Hibernate. I'm sure the use of RDF and OWL provided some functionality that isn't easily accomplished by Hibernate, but I don't see the benefits being strong enough to make it worth the switch to a dynamic model.<br /><br />For my money, I'd either adopt the standard static approach that so many systems already employ to great effect, or go the whole hog and design an OTM system that is truly dynamic and open.Paulahttp://www.blogger.com/profile/03653112583629043593noreply@blogger.com1tag:blogger.com,1999:blog-6848574.post-69505814141876663992010-04-21T21:26:00.003-05:002010-04-21T22:43:51.857-05:00<h3>Work</h3> I've had a number of administrative things to get done this week, since work will be taking a dramatic new turn soon. I've been missing working in a team, so that part will be good, but there are too many unknowns right now, including a visa nightmare that has been unceremoniously dumped in my lap. So, I'm stressed and have a lot to do. But that doesn't mean I'm not working.<br /><br /><h3>Multi-Results</h3><br />I'd recently been asked to allow the HTTP interfaces to return multiple results.
One look at the <a href="http://www.w3.org/TR/rdf-sparql-XMLres/">SPARQL Query Results XML Format</a> makes it clear that SPARQL isn't capable of it, but the TQL XML format has always allowed it - or at least, I think it did. The SPARQL structure is sort of flat, with a declaration of variables at the top, and bindings under it. The TQL structure is similar, but embeds it all in another element called a "query". That name seems odd (since it's a result, not a query), so I wonder if someone had intended to include the original query as an attribute of that tag. Anyway, the structure is available, so I figured I should add it.<br /><br />This was a little trickier than I expected, since I'd tried to abstract out the streaming of answers. This means that I could select the output type simply by using a different streaming class. For now, the available classes are SPARQL-XML, SPARQL-JSON and TQL-XML, but there could easily be others. However, I now had to modify all of those classes to handle multiple answers. Of course, the SPARQL streaming classes had to ignore them, while the TQL class didn't, but that wasn't too hard. However, I came away feeling that it was somehow messier than it ought to have been. Even so, I thought it worked OK.<br /><br />One bit of complexity was in handling the GET requests of TQL vs. SPARQL. In SPARQL we can only expect a single query in a GET, but TQL can have multiple queries, separated by semicolons. While I like to keep as much code as possible common to all the classes, in the end I decided that the complexity of doing this was more than it was worth, and I put special multi-query-handling code in the TQL servlet.<br /><br />All of this was done a little while ago, but because I was waiting on responses on the mulgara.org move, I decided not to release just yet. This was probably fortunate, since I got an email the other day explaining that subqueries were not being embedded properly. They were starting with a new <code>query</code> element tag, but not closing with them. However, these tags should not have appeared at this level at all. The suggested patch would have worked, but it relied on checking the indentation used for pretty-printing in order to find out if the <code>query</code> element should be opened. This would work, but was covering the problem, rather than solving it. A bit of checking, and I realized that I had code to send a header for each answer, code to send the data for the answer, but no code for the "footer". The footer would have been the closing tag for the <code>query</code> element, and this was being handled in other code, meaning that it only came up at the top level, and not in the embedded sub-answers. This in turn meant that it wasn't always matching up to the header. So I introduced a footer method for answers (a no-op in SPARQL-XML and SPARQL-JSON) which cleaned up the process well enough that avoiding the header (and footer) on sub-answers was now easy to see and get right.<br /><br />So was I done? No. The email also commented on warnings of transactions not being closed. So I went looking at this, and decided that all answers were being closed properly. In confusion, I looked at the email again, and this time realized that the bug report said that they were using <code>POST</code> methods. Since I was only dealing with queries (and not update commands) I had only gone to the <code>GET</code> method. 
So I looked at <code>POST</code>, and sure enough it was a dog's breakfast.<br /><br />Part of the problem with a <code>POST</code> is that it can include updates as well as queries. Not having a standard response for an update, I had struggled a little with this in the past. In the end, I'd chosen to only output the final result of all operations, but this was causing all sorts of problems. For a start, if there was more than one query, then only the last would be shown (OK in SPARQL, not in TQL). Also, since I was ignoring so many things, it meant that I wasn't closing anything if it needed it. This was particularly galling to have wrong, since I'd finally added SPARQL support for <code>POST</code> queries.<br /><br />I'd really have liked to use the same multi-result code that I had for <code>GET</code> requests, but that didn't look like it was going to mix well with the need to support commands in the middle. In the end I copied/pasted some of the GET code (shudder) and fixed it up to deal with the result lists that I'd already built through the course of processing the <code>POST</code> request. It doesn't look too bad, and I've commented on the redundancy and why I've allowed it, so I think it's all OK. Anyway, it's all looking good now. Given that I also have a major bugfix from a few weeks back, I should get it out the door despite the mulgara.org shuffle not being done.<br /><br />I didn't mention that major bug, did I? For anyone interested, some time early last year a race bug was avoided by putting a lock into the transaction code. Unfortunately, that lock was too broad, and it prevented any other thread from reading while a transaction was being committed. This locked the database up during large commit operations. It's not the sort of thing that you're likely to see with unit tests, but I was still deeply embarrassed. At least I found it (a big thanks to the guys at PLoS for reporting this bug, and helping me find where it was).<br /><br />So before I get dragged into any admin stuff tomorrow morning (office admin or sysadmin), I should try to cut a release to clean up some of these problems.<br /><br />Meanwhile, I'm going to relax with a bit of Hadoop reading. I once talked about putting a triplestore on top of this, and it's an idea that's way overdue. I know others have tried exactly this, but each approach has been different, and I want to see what I can make of it. But I think I need a stronger background in the subject matter before I try to design something in earnest.Paulahttp://www.blogger.com/profile/03653112583629043593noreply@blogger.com0tag:blogger.com,1999:blog-6848574.post-30123912757395542472010-04-19T11:03:00.003-05:002010-04-19T20:17:34.202-05:00<h3>jSPARQLc</h3> <br />I spent Sunday and a little bit of this afternoon finishing up the code and testing for the SPARQL API. The whole thing had just a couple of typos in it, which surprised me no end, because I used VIM and not Eclipse. I must be getting more thorough as I age. :-)<br /><br />Anyway, the whole thing works, though it's limited in scope. To start with, the only data accessing methods I wrote were <code>getObject(int)</code> and <code>getObject(String)</code>. Ultimately, I'd like to include many of the other <code>get...</code> methods, but these would require tedious mapping. For instance, <code>getInt()</code> would need to map <code>xsd:int</code>, <code>xsd:integer</code>, and all the other integral types to an integer.
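<br /><br />To give a flavour of it, a sketch of the sort of mapping <code>getInt()</code> would need is below. The URIs are the standard XSD datatypes, but the method itself is purely illustrative and not the Mulgara or jSPARQLc code:<pre><code>import java.util.*;

class DatatypeMapping {
  private static final String XSD = "http://www.w3.org/2001/XMLSchema#";

  // The integral types that should be acceptable to a getInt() call.
  private static final Set<String> INTEGRAL_TYPES = new HashSet<String>(Arrays.asList(
      XSD + "int", XSD + "integer", XSD + "long", XSD + "short", XSD + "byte",
      XSD + "nonNegativeInteger", XSD + "nonPositiveInteger",
      XSD + "positiveInteger", XSD + "negativeInteger",
      XSD + "unsignedLong", XSD + "unsignedInt", XSD + "unsignedShort", XSD + "unsignedByte"));

  // Map a typed literal to an int, or complain if the datatype isn't integral.
  static int getInt(String lexicalValue, String datatypeUri) {
    if (!INTEGRAL_TYPES.contains(datatypeUri)) {
      throw new IllegalArgumentException("Not an integral datatype: " + datatypeUri);
    }
    return Integer.parseInt(lexicalValue);
  }
}</code></pre>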
I've done this work in some of the internal APIs for Mulgara (particularly when dealing with SPARQL), so I know how tedious it is. I suppose I can copy/paste a good portion of it out of Mulgara (it's all Apache 2.0), but for the moment I wanted to get it up and running the way I said.<br /><br />If I were to do all of this, then there are all sorts of mappings that can be done between data types. For instance, <code>xsd:base64Binary</code> would be good for Blobs. I could even introduce some custom data types to handle things like serialized Java objects, with a data type like: <code>java:org.example.ClassName</code>. Actually, that looks familiar. I should see if anyone has done it.<br /><br />Anyway, as I progressed, I found that while it was straightforward enough to get basic functionality in, the JDBC interfaces are really inappropriate.<br /><br />To start with, JDBC usually accesses a "cursor" at the server end, and this is accessed implicitly by <code>ResultSet</code>. It's not absolutely necessary, but moving backwards and forwards through a result that isn't entirely in memory really does need a server-side cursor. Since I'm doing everything in memory right now, I was able to do an implementation that isn't <code>TYPE_FORWARD_ONLY</code>, but if I were to move over to using StAX (in the comments from my last entry) then I'd have to fall back to that.<br /><br />The server-side cursor approach also makes it possible to write to a <code>ResultSet</code>, since SQL results are closely tied to the tables they represent. However, this doesn't really apply to RDF, since statements can never be updated, only added and removed. SPARQL Update is coming (I ought to know, as I'm the editor for the document), but there is no real way to convert the update operations on a <code>ResultSet</code> back into SPARQL-Update operations over the network. It might be theoretically possible, but it would need a lot of communication with the server, and it doesn't even make sense. After all, you'd be trying to map one operation paradigm to a completely different one. Even if it could be made to work, it would be confusing to use. Since my whole point in writing this API was to simplify things for people who are used to JDBC, it would be self-defeating.<br /><br />So if this API were to allow write operations as well, then it would need a new approach, and I'm not sure what that should be. Passing SPARQL Update operations straight through might be the best bet, though it's not offering a lot of help (beyond doing all the HTTP work for you).<br /><br />The other thing that I noticed was that a blind adherence to the JDBC approach created a few classes that I don't think are really needed. For instance, the <code>ResultSetMetaData</code> interface only contains two methods that make any sense from the perspective of SPARQL: <code>getColumnCount()</code> and <code>getColumnName()</code>. The data comes straight out of the ResultSet, so I would have put them there if the choice were mine. The real metadata is in the list of "link" elements in the result set, but this could be encoded with anything (even text) so there was no way to make that metadata fit the JDBC API. Instead, I just let the user ask for the list of links directly (creating a new method to do so).<br /><br />Another class that didn't make too much sense to me was <code>Statement</code>.
It's a handy place to record some state about what you've been doing on a <code>Connection</code>, but other than that, it just seems to proxy the <code>Connection</code> it's attached to. I see there are some options for caching (which I've never used myself when working with JDBC), so I suspect that it does more than I give it credit for, but for the moment it just appears to be an inconvenience.<br /><br />Anyway, I thought I'd put it up somewhere, and since I haven't tried Google's code repository before, I've put it up there. It's a Java SPARQL Connectivity library, so for lack of anything better I called it <a href="http://code.google.com/p/jsparqlc/">jSPARQLc</a> (maybe JRC for Java RDF Connectivity would have been better, but there are lots of JRC things out there, whereas jSPARQLc didn't return any hits from Google, so I went with that). It's very raw and has very little configuration, but it passes its tests. :-)<br /><br />Speaking of tests, if you want to try it, then the connectivity tests won't pass until you've done the following:<ul><li>Start a SPARQL endpoint at http://localhost:8080/sparql/ (the source code in the test needs to change if your endpoint is elsewhere).</li><li>Create a graph with the URI <code><test:data></code></li><li>Load the data in <em>test.n3</em> into it (this file is in the root directory)</li></ul>I know I shouldn't have hardcoded some of this, but it was just a test on a 0.1-level project. If it seems useful, and/or you have ideas for it, then please let me know.<br /><br /><h3>Mulgara</h3><br />Other than this, I have some administration I need to do to get both Mulgara and Topaz onto another server, and this seems to slow everything down as I wait for the admins to get back to me. It's also why there hasn't been a Mulgara release recently, even though it's overdue. However, I just got a message from an admin today, so hopefully things have progressed. Even so, I think I'll just cut the release soon anyway.<br /><br />One fortunate aspect of the delayed release was a message I got from David Smith about how some resources aren't being closed in the TQL REST interface (when subqueries are used). He's sent me a patch, but I need to spend some time figuring out why I got this wrong, else I could end up hiding the real problem. That's a job for the morning... right after the SPARQL Working Group meeting. Once all of that is resolved, I'll get a release out, and try to figure out what I can do to speed up the server migration.<br /><br />Oh, and I need to update Topaz to take advantage of some major performance improvements in Mulgara, and then I need to find even more performance improvements. Hopefully I'll be onto some of that by the afternoon, but I don't want to promise the moon only to come back tomorrow night and confess I got stuck on the same thing all day.Paulahttp://www.blogger.com/profile/03653112583629043593noreply@blogger.com0tag:blogger.com,1999:blog-6848574.post-27688120352857232922010-04-17T19:08:00.003-05:002010-04-17T20:46:55.618-05:00<h3>SPARQL API</h3> Every time I try to use SPARQL with Java I keep running into all that minutiae that Java makes you deal with. The <a href="http://hc.apache.org/">HttpComponents from Apache</a> make things easier, but there's still a lot of code that has to be written. Then after you get your data back, you still have to process it, which means XML or JSON parsing.
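<br /><br />To give a sense of it, even the bare minimum request looks something like the following sketch (plain <code>HttpURLConnection</code> here rather than HttpComponents, and it is only an illustration of the shape of the code, before any of the XML handling):<pre><code>import java.io.InputStream;<br />import java.net.HttpURLConnection;<br />import java.net.URL;<br />import java.net.URLEncoder;<br /><br />// Sketch only: the boilerplate needed just to get a result document back.<br />class RawSparqlQuery {<br />  static InputStream query(String endpoint, String sparql) throws Exception {<br />    String url = endpoint + "?query=" + URLEncoder.encode(sparql, "UTF-8");<br />    HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();<br />    conn.setRequestProperty("Accept", "application/sparql-results+xml");<br />    if (conn.getResponseCode() != 200) {<br />      throw new Exception("Query failed: HTTP " + conn.getResponseCode());<br />    }<br />    return conn.getInputStream();  // the XML parsing still has to start here<br />  }<br />}</code></pre>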
All of this means a lot of code, just to get a basic framework going.<br /><br />I know there are a lot of APIs out there for working with RDF engines, but there aren't many for working directly over SPARQL. I eventually had a look and found <a href="http://sparql.sourceforge.net/">SPARQL Engine for Java</a>, but this seemed to have more client-side processing than I'd expect. I haven't looked too carefully at it, so this may be incorrect, but I thought it would be worthwhile to take all the boilerplate I've had to put together in the past, and see if I can glue it all together in some sensible fashion. Besides, today's Saturday, meaning I don't have to worry about my regular job, and I'm recovering from a procedure yesterday, so I couldn't do much more than sit at the computer anyway.<br /><br />One of my inspirations was a conversation I had with <a href="http://bblfish.net/">Henry Story</a> (hmmm, Henry's let that link get badly out of date) a couple of years ago about a standard API for RDF access, much like <a href="http://java.sun.com/javase/6/docs/technotes/guides/jdbc/">JDBC</a>. At the time I didn't think that Sun could make something like that happen, but if there were a couple of decent attempts at it floating around, then some kind of pseudo-standard could emerge. I'd never tried it before, but today I thought it might be fun to try.<br /><br />The first thing I remembered was that when you write a library, you end up writing all sorts of tedious code while you consider the various ways that a user might want to use it. So I stuck to the basics, though I did add in various options as I dealt with individual configuration options. So it's possible to set the <em>default-graph-uri</em> as a single item as well as with a list (since a lot of the time you only want to set one graph URI). I was eschewing Eclipse today, so I ended up making use of VIM macros for some of my more tedious coding. The tediousness also reminded me again why I like Scala, but given that I wanted it to look <em>vaguely</em> JDBC-like, I figured that the Java approach was more appropriate.<br /><br />I remember that TKS (the name of the first incarnation of the Mulgara codebase) had attempted to implement JDBC. Apparently, a good portion of the API was implemented, but there were some elements that just didn't fit. So from the outset I avoided trying to duplicate that mistake. Instead, I decided to cherry-pick the most obvious features, abandon anything that doesn't make sense, and add in a couple of new features where it seems useful or necessary. So while some of it might <em>look</em> like JDBC, it won't have anything to do with it.<br /><br />I found a piece of trivial JDBC code I'd used to test something once-upon-a-time, and tweaked it a little to look like something I might try to do with SPARQL. My goal was to write the library that would make this work, and then take it from there.
This is the example:<br /><pre><code> final String ENDPOINT = "http://localhost:8080/sparql/";<br /> Connection c = DriverManager.getConnection(ENDPOINT);<br /><br /> Statement s = c.createStatement();<br /> s.setDefaultGraph("test:data");<br /> ResultSet rs = s.executeQuery("SELECT * WHERE { ?s ?p ?o }");<br /> rs.beforeFirst();<br /> while (rs.next()) {<br /> System.out.println(<br /> rs.getObject(1).toString() + ", " +<br /> rs.getObject(2) + ", " +<br /> rs.getObject(3));<br /> }<br /> rs.close();<br /> c.close();<br /></code></pre><br /><br />My first thought was that this is not how I would design the API (the "Statement" seems a little superfluous), but that wasn't the point.<br /><br />Anyway, I've nearly finished it, but I'm dopey from pain medication, so I thought I'd write down some thoughts about it, and pick it up again in the morning. So if anyone out there is reading this (which I doubt, given how little I write here) these notes are more for me than for you, so don't expect to find it interesting. :-)<br /><br /><h3>Observations</h3><br />The first big difference from JDBC is the configuration. A lot of JDBC is either specific to a particular driver, or to RDBMSs in general. This goes for the structure of the API as well as the configuration. For instance, <code>ResultSet</code> seems to be heavily geared towards cursors, which SPARQL doesn't support. I was momentarily tempted to try emulating this functionality through LIMIT and OFFSET, but that would have involved a lot of network traffic, and could potentially interfere with the user trying to use these keywords themselves. Getting the row number (<a href="http://java.sun.com/javase/6/docs/api/java/sql/ResultSet.html#getRow()">getRow</a>) would have been really tricky if I'd gone that way too.<br /><br />But ResultSet was one of the last things I worked on today, so I'll rewind.<br /><br />The first step was making the HTTP call. I usually use GET, but I've recently added in the <a href="http://www.w3.org/TR/rdf-sparql-protocol/#query-bindings-http">POST binding</a> for SPARQL querying in Mulgara, so I made sure the client code can do both. For the moment I'm automatically choosing to do a POST query when the URL gets above 1024 characters (I believe that was the URL limit for some version of IE), but I should probably make the use of POST vs. GET configurable. Fortunately, building parameters was identical for both methods, though they get put into different places.<br /><br />Speaking of parameters, I need to check this out, but I believe that graph URIs in SPARQL do not get encoded. Now that's not going to work if they contain their own queries (and why wouldn't they?), but most graphs don't do that, so it's never bitten me before. Fortunately, doing a URL-Decode on an unencoded graph URI is usually safe, so that's how I've been able to get away with it until now. But as a client that has to do the encoding I needed to think more carefully about it.<br /><br />From what I can tell, the only part that will give me grief is the query portion of the URI. So I checked out the query, and if there wasn't one, I just sent the graph unencoded. If there was one, then I'd encode just the query, add it to the URI, and then see if decoding got me back to the original. If it does, then I send that. Otherwise, I just encode the whole graph URI and send that.
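<br /><br />Roughly, the check amounts to something like this (a sketch of the approach only, not the actual library code, and the class and method names are mine):<pre><code>import java.net.URLDecoder;<br />import java.net.URLEncoder;<br /><br />// Sketch only: encode just the query portion of a graph URI when that<br />// round-trips cleanly, otherwise fall back to encoding the whole URI.<br />class GraphParamEncoder {<br />  static String encodeGraphParam(String graphUri) throws Exception {<br />    int q = graphUri.indexOf('?');<br />    if (q == -1) return graphUri;  // no query portion: send it unencoded<br />    String candidate = graphUri.substring(0, q + 1)<br />                     + URLEncoder.encode(graphUri.substring(q + 1), "UTF-8");<br />    if (graphUri.equals(URLDecoder.decode(candidate, "UTF-8"))) {<br />      return candidate;            // decoding gets the original back: use it<br />    }<br />    return URLEncoder.encode(graphUri, "UTF-8");  // otherwise encode the lot<br />  }<br />}</code></pre>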
As I write it down, it looks even more like a hack than ever, but so far it seems to work.<br /><br />So now that I have all the HTTP stuff happening, what about the response? Since answers can be large, my first thought was <a href="http://java.sun.com/javase/6/docs/api/org/xml/sax/package-summary.html">SAX</a>. Well, actually, my first thought was Scala, since I've already parsed SPARQL response documents with Scala's XML handling, and it was trivial. But I'm in Java, so that means SAX or DOM. SAX can handle large documents, but possibly more importantly, I've always found SAX easier to deal with than DOM, so that's the way I went.<br /><br />Because SAX operates on a stream, I thought I could build a stream handler, but I think that was just the medication talking, since I quickly remembered that it's an event model. The only way I could do it as a stream would be if I built up a queue with one thread writing at one end and the consuming thread taking data off at the other. That's possible, but it's hard to test if it scales, and if the consumer doesn't drain the queue in a timely manner, then you can cause problems for the writing end as well. It's possible to slow down the writer by not returning from the event methods until the queue has some space, but that seems clunky. Also, when you consider that a ResultSet is supposed to be able to rewind and so forth, a streaming model just doesn't work.<br /><br />In the end, it seemed that I would have to have my ResultSets in memory. This is certainly easier than any other option I could think of, and the size of RAM these days means that it's not really a big deal. But it's still in the back of my mind that maybe I'm missing an obvious idea.<br /><br />The other thing that came to mind is to create an API that provides object events in the same way that SAX provides events for XML elements. This would work fine, but it's nothing like the API I'm trying to emulate, so I didn't give that any serious thought.<br /><br />So now I'm in the midst of a SAX parser. There's a lot of work in there that I don't need when working with other languages, but it does give you a comfortable feeling knowing that you have such fine-grained control over the process. Java enumerations have come in handy here, as I decided to go with a state-machine approach. I don't use this very often (outside of hardware design, where I've always liked it), but it's made the coding so straightforward it's been a breeze.<br /><br />One question I have is whether the parser should create a ResultSet object, or if it should <em>be</em> the object. It's sort of easy to just create the object with the InputStream as the parameter for the constructor, but then the object you get back could be either a boolean result or a list of variable bindings, and you have to interrogate it to find out which one it is. The alternative is to use a factory that returns different types of result sets. I initially went with the former because both have to parse the header section, but now that I've written it out, I'm thinking that the latter is the better way to go. I'll change it in the morning.<br /><br />I'm also thinking of having a parser to deal with JSON (I did some abstraction to make this easy), but for now I'll just take one step at a time.<br /><br />One issue I haven't given a lot of time to yet is the CONSTRUCT query. These have to return a graph and not a result set. That brings a few questions to mind:<ul><li>How do I tell the difference?
I don't want to do it in the API, since that's something the user may not want to have to figure out. But short of having an entire parser, it could be difficult to see the form of the query before it's sent.</li><li>I can wait for the response, and figure it out there, but then my SAX parser needs to be able to deal with RDF/XML. I usually use Jena's parser for this, since I know writing one is a lot of work. Do I really want to go that way? Unfortunately, I don't know of any good way to move to a different parser once I've seen the opening elements. I <em>could</em> try a <a href="http://java.sun.com/javase/6/docs/api/java/io/BufferedInputStream.html">BufferedInputStream</a>, so I could rewind it, but can that handle really large streams? I'll think on that.</li><li>How do I represent the graph at the client end?</li></ul>Representing a graph goes way beyond ResultSet, and poses the question of just how far to go. A simple list of triples would probably suffice, but if I have a graph then I usually want to do interesting stuff with it.<br /><br />I'm thinking of using my normal graph library, which isn't out in the wild yet, but I find it very useful. I currently have implementations of it in Java, Ruby and Scala. I keep re-implementing it whenever I'm in a new language, because it's just so useful (it's trivial to put under a Jena or Mulgara API too). However, it also goes beyond the JDBC goal that I was looking for, so I'm cautious about going that way.<br /><br />Anyway, it's getting late on a Saturday night, and I'm due for some more pain medication, so I'll leave it there. I need to talk to people about work again, so having an active blog will be important once more (even if it makes me look ignorant occasionally). I'll see if I can keep it up.Paulahttp://www.blogger.com/profile/03653112583629043593noreply@blogger.com4tag:blogger.com,1999:blog-6848574.post-30406192517962377222009-05-21T13:15:00.004-05:002009-05-21T18:15:07.306-05:00<h3>Federated Queries</h3> A long time ago, TKS supported federated queries, though the approach was a little naive (bring all the matches of triple patterns into a single place, and join them there). Then a few years ago I added this to Mulgara as well. I've always wanted to make it more intelligent in order to reduce network bandwidth, but at the same time, I was always happy that it worked. Unfortunately, it was all accomplished through RMI, and was Mulgara-specific. That used to be OK, since RDF servers didn't have standardized communications mechanisms, but that changed with SPARQL.<br /><br />More recently, I've started running across distributed queries through another avenue. While working through the SPARQL protocol, I realized that the Mulgara approach of treating unknown HTTP URIs as data that can be retrieved can be mixed with SPARQL CONSTRUCT queries encoded into a URI. The result of an HTTP request on a SPARQL CONSTRUCT query is an RDF document, which is exactly what Mulgara is expecting when it does an HTTP GET on a graph URI. The resulting syntax is messy, but it works quite well. Also, while retrieving graph URIs is not standard in SPARQL, most systems implement this, making it a relatively portable idiom.
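<br /><br />In other words, the "graph" is nothing more than an endpoint URL with a CONSTRUCT query encoded onto it, which a client can build in a couple of lines (a sketch with hypothetical names; a full query using such a URI appears further down):<pre><code>import java.net.URLEncoder;<br /><br />// Sketch only: turn a remote endpoint plus a CONSTRUCT query into a "graph"<br />// URI that can be dereferenced with a plain HTTP GET.<br />class RemoteGraph {<br />  static String asGraphUri(String endpoint, String constructQuery) throws Exception {<br />    return endpoint + "?query=" + URLEncoder.encode(constructQuery, "UTF-8");<br />  }<br />}</code></pre>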
I was quite amused at the exclamations of surprise and horror (especially the horror) when I <a href="http://lists.w3.org/Archives/Public/semantic-web/2009May/0029.html">passed this along on a mailing list</a> a few weeks ago.<br /><br />The ease with which this was achieved using SPARQL made me consider how federated querying might be done using a SPARQL-like protocol. Coincidentally, the <a href="http://www.w3.org/2009/sparql/wiki/Main_Page">SPARQL Working Group</a> has <a href="http://www.w3.org/2009/sparql/wiki/Feature:BasicFederatedQuery">Basic Federated Queries</a> as a proposed feature, and now I'm starting to see a lot of people asking about it on mailing lists (were people always asking about this, or am I just noticing it now?). I'm starting to think this feature may be more important to SPARQL than I first thought, and that perhaps I should have made it a higher priority when I voted on it. As it is, it's in the list of things we'll get to if we have time.<br /><br />Then, while I was thinking about this, one of the other Mulgara developers told me that he absolutely has to have distributed queries (actually, he needs to run rules over distributed datasets) to meet requirements in his organization. Well, the existing mechanisms will sort of work for him, but to do it right it should be in SPARQL.<br /><br /><h3>Requirements</h3> So what would I want to see in federated SPARQL queries? Well, as an implementer I need to see a few things:<br /><ol><li>A syntactic mechanism for defining the URI of a SPARQL endpoint containing the graph(s) to be accessed.</li><li>A syntactic mechanism for defining the query to be made on that endpoint (a subquery syntax would be fine here).</li><li>A means of asking the size of a query result.</li><li>A mechanism for passing existing bindings along with a query.</li></ol><br /><br />The first item seemed trivial until I realized that SPARQL has no standard way of describing an endpoint. Systems like Mulgara simply use <code>http://hostname/sparql/</code>, which provides access to the entire store (everything can be referred to using HTTP parameters, such as <code>default-graph-uri</code> and <code>query</code>). On the other hand, <a href="http://www.joseki.org/">Joseki</a> can do the /sparql/ thing, but also provides an option to access datasets through the path, and <a href="http://www.openrdf.org/">Sesame</a> can have several repositories, each of which is accessible by varying the path in the URL.<br /><br />The base URL for issuing SPARQL queries against would be easy enough to specify, but it introduces a new concept into the query language, and that has deeper ramifications than should be broached in this context.<br /><br />The query that can be issued against an endpoint should look like a standard query, and not just a CONSTRUCT, as this provides greater flexibility and also binds the columns to variable names that can appear in other parts of the query. This is basically identical to a subquery, which is exactly what we want.<br /><br /><h3>Bandwidth Efficiency</h3> The last 2 items are a matter of efficiency and not correctness. However, they can mean the difference between transferring a few bytes vs. a few megabytes over a network.<br /><br />(BTW, when did "bandwidth" get subverted to describe data rates? When I was a boy this referred to the range of frequencies that a signal used, and this had a mathematical formula relating it to the number of symbols-per-second that could be transmitted over that signal - which does indeed translate to a data rate.
However, it now gets used in completely different contexts which have nothing to do with frequency range. Oh well... back to the story.)<br /><br />If I want to ask for the identifiers of people named "Fred" (as opposed to something else I want to name with <code>foaf:givenname</code>), then I could use the query:<pre><code>PREFIX foaf: <http://xmlns.com/foaf/0.1/><br />SELECT ?person WHERE {<br /> ?person foaf:givenname "Fred" .<br /> ?person a foaf:Person<br />}</code></pre>Now what if the "type" data and the "name" data appear on different servers? In that case we would use a distributed query.<br /><br />Using the HTTP/GET idiom I mentioned at the top of this post, I could send the query to the server containing the <code>foaf:givenname</code> statements, and change it to say:<pre><code>PREFIX foaf: <http://xmlns.com/foaf/0.1/><br />SELECT ?person WHERE {<br /> ?person foaf:givenname "Fred" .<br /> GRAPH <http://hostname/sparql/?query=<br />PREFIX+foaf%3A+%3Chttp%3A%2F%2Fxmlns.com%2Ffoaf%2F0.1%2F%3E%0A<br />CONSTRUCT+%7B%3Fperson+a+foaf%3APerson%7D+<br />WHERE+%7B%3Fperson+a+foaf%3APerson%7D> {<br /> ?person a foaf:Person<br /> }<br />}</code></pre>So now the server will resolve all the entities with the name "Fred", then it will retrieve a graph and ask it for all the entities that are a <code>foaf:Person</code>. Then it will join these results to create the final result.<br /><br />But what happens if there are only 3 things named "Fred", but 10,000 people in the data set? In that case the server will resolve the first pattern, getting 3 bindings for ?person, and then make a request across the network, getting back 10,000 statements, which are then queried for those statements that describe a <code>foaf:Person</code> (they all will), and only then does the join happen. Ideally, we'd have gone the other way, and asked the server with 10,000 people to request data from the server that had 3 entities named Fred, but we may not have known ahead of time that this would be better, and a more complex query could require a more complex access pattern than simply "reversing" the resolution order.<br /><br />First of all, we need a way to ask each server how large a set of results is likely to be. The <a href="http://www.w3.org/2009/sparql/wiki/Feature:AggregateFunctions">COUNT</a> function that is being discussed in the SPARQL <abbr title="Working Group">WG</abbr> at the moment could certainly be used to help here, though for the sake of efficiency it would also be nice to have a mechanism for asking for the upper limit of the COUNT. That isn't appropriate for a query language (since it refers to database internals) but would be nice to have in the protocol, such as with an HTTP/OPTION request (though I <em>really</em> don't see something like that being ratified by the SPARQL WG). But even without an "upper limit" option, a normal COUNT would give us what we need to find out how to move the query around.<br /><br />So once we realize that the server running the query has only a little data (call it "Server A"), and it needs to join it to a large amount of data on a different server (call this one "Server B"), then of course we want Server A to send the small amount of data to Server B instead of retrieving the large amount from it. One way to do this might be to invert the query at this point, and send the whole thing to Server B. That server then asks Server A for the data, and sends its response. Unfortunately, that is both complex, and requires a lot more hops than we want.
The final chain here would be:<ol><li>Client sends query as a request to Server A</li><li>Server A reverses the query and sends the new query as a request to Server B</li><li>Server B resolves its local data, and sends the remainder of the query as a request to Server A</li><li>Server A responds to Server B with the result of entities with the name "Fred"</li><li>Server B joins the data it got with the local data and responds to Server A with the results of the entire query</li><li>Server A responds to the client with the unmodified results it just received</li></ol><br />Yuck.<br /><br />Instead, when Server A detects a data size disparity like this, it needs a mechanism to package up its bindings for the <em>?person</em> variable, and send these to Server B along with the request. Fortunately, we already have a format for this in the <a href="http://www.w3.org/TR/rdf-sparql-XMLres/">SPARQL result set format</a>.<br /><br />Normally, a query would be performed using an HTTP/GET, but including a content-body in a GET request has never been formally recognized (though it has not been made illegal), so I don't want to go that way. Instead, a POST would work just as well here. The HTTP request with content could look like this (I've added line breaks to the request):<pre><code>POST /sparql/?query=<br />PREFIX+foaf%3A+%3Chttp%3A%2F%2Fxmlns.com%2Ffoaf%2F0.1%2F%3E%0A<br />SELECT+%3Fperson+WHERE+%7B%3Fperson+a+foaf%3APerson%7D HTTP/1.1<br />Host: www.example<br />User-agent: my-sparql-client/0.1<br />Content-Type: application/sparql-results+xml<br />Content-Length: xxx<br /><br /><?xml version="1.0"?><br /><sparql xmlns="http://www.w3.org/2005/sparql-results#"><br /> <head><br /> <variable name="person"/><br /> </head><br /> <results distinct="false" ordered="false"><br /> <result><br /> <binding name="person"><uri>http://www.example/FredFlintstone</uri></binding><br /> </result><br /> <result><br /> <binding name="person"><uri>http://www.example/FredKruger</uri></binding><br /> </result><br /> <result><br /> <binding name="person"><uri>http://www.example/FredTheDog</uri></binding><br /> </result><br /> </results><br /></sparql></code></pre><br />I can't imagine that I could be successful in suggesting this as part of the underlying protocol for federated querying, but I'm thinking that I'll be incorporating it into Mulgara all the same.Paulahttp://www.blogger.com/profile/03653112583629043593noreply@blogger.com4tag:blogger.com,1999:blog-6848574.post-13658264578130772882009-02-27T12:37:00.002-06:002009-02-27T21:16:15.411-06:00<h3>More Programmable Logic</h3> In the last post I gave a basic description of how the Krule rules engine works. I left out a number of details, but it covers the overall approach.<br /><br />The most important detail I skipped was the interaction with resolvers in order to identify resources with particular properties. This includes, among other things, finding URIs in a domain, inequality comparisons for numeric literals, regular expression matching on strings, the "type" of a resource (URI, blank node, or literal), and optimized transitivity of predicates. We also manage "subtractions" in our data, in the same way that Evren Sirin <a href="http://clarkparsia.com/weblog/2009/02/11/integrity-constraints-for-owl/">described NOT</a> two weeks ago. I should point out that Mulgara does subtractions directly, and not with a combined operation with the OPTIONAL/FILTER/!BOUND pattern. This was introduced in 2004 (or was it 2005?).
I have to say that I'm a little surprised that SPARQL never included anything to express it directly, particularly since so many people ask for it (and hence, the popularization of OPTIONAL/FILTER/!BOUND as a pattern) and because the <a href="http://www.w3.org/TR/rdf-sparql-query/#sparqlAlgebra">SPARQL algebra</a> provides a definition of a function called "Diff" that is used internally.<br /><br />Anyway, these extensions are not necessary to understand Krule or RLog, but I think it's useful to know that they're there.<br /><br />So now that I've described Krule, I've set the scene for describing RLog.<br /><h3>Krule Configuration</h3> When I first wrote <abbrev title="Kowari Rules">Krule</abbrev>, I was ultimately aiming at OWL, but I had a short-term goal of RDFS. I find that I have to take these things one step at a time, or else I never make progress. Since I knew that my rules were going to expand, I figured I should not hard-code anything into Mulgara, but that I should instead interpret a data structure which described the rules. That also meant I would be able to run rules for lots of systems: RDFS, SKOS, OWL, or anything else. Of course, some things would need more features than RDFS needed (e.g. both OWL and SKOS need "lists"), but my plan was to work on that iteratively.<br /><br />At the time, I designed an <a href="http://mulgara.org/files/misc/krule.rdf">RDF schema</a> to describe my rules, and built the Krule engine to initialize itself from this. This works well, since the whole system is built around RDF already. I also created a new TQL command for applying rules to data:<pre><code> <strong>apply</strong> <<em>rule_graph_uri</em>> <strong>to</strong> <<em>data_graph_uri</em>> [<<em>output_graph_uri</em>>]</code></pre>By default all of the entailed data goes into the graph the rules are being applied to, but by including the optional output graph you can send the entailed data there instead.<br /><br />This worked as planned, and I was able to build a <a href="http://mulgara.org/files/misc/rdfs-krule.rdf">Krule configuration graph for RDFS</a>. Then life and work interfered and the rules engine was put on the back burner before I got to add some of the required features (like consistency checking).<br /><br />Then about 18 months ago I thought I'd have a go at writing OWL entailment, at least for that part that the rules engine would support. So I set out to write a new Krule file. The complexity of the file was such that I started writing out the rules that I wanted using a kind of Prolog notation with second-order programming, in a very similar way to how Raphael Volz represented the same constructs in some of <a href="http://www.daml.org/listarchive/joint-committee/att-1254/01-bubo.pdf">his</a> <a href="http://lists.w3.org/Archives/Public/www-webont-wg/2002Oct/att-0033/Paper.pdf">papers</a>. This grammar uses binary predicates to represent general triples, and unary predicates to indicate "type" statements, i.e. statements with a predicate of <em>rdf:type</em>. As an example, the <code>owl:sameAs</code> predicate indicates that if the subject of a statement is <code>owl:sameAs</code> another resource, then that statement can be duplicated with the other resource as the subject. This was easily expressed as:<pre><code> A(Y,Z) :- A(X,Z), owl:sameAs(X,Y).</code></pre>I wrote out about 3 rules before I realized that converting these to Krule was going to be tedious and prone to error.
In fact, I had unthinkingly demonstrated that I already had a language I wanted to use, and the process of translation was an easily automated task. Since the language was allowing me to describe RDF with logic, I decided to call it RLog (for RDF Logic).<br /><br /><h3>Iterations</h3> Andrae and I had been discussing how much we disliked <a href="http://sablecc.org/">SableCC</a> for generating the <abbrev title="Tucana Query Language">TQL</abbrev> parser for Mulgara, and so I started looking around at other parsers. The power of <a href="http://en.wikipedia.org/wiki/LALR_parser">LALR parsers</a> appealed to me, and so I went with <a href="http://beaver.sourceforge.net/">Beaver</a>. Along with the <a href="http://jflex.de/">JFlex</a> lexer, this software is a pleasure to use. I had learned how to use them both, <em>and</em> created the RLog grammar in about an hour. I then converted the <a href="http://mulgara.org/files/misc/rdfs-krule.rdf">Krule configuration for RDFS</a> into this new grammar, and convinced myself that I had it right. Then life got in the way again, and I put it away.<br /><br />Last year while waiting for some tests to complete, I remembered this grammar, and spent some of my enforced downtime making it output some useful RDF in the Krule schema. For anyone who's looked at Krule, they may have noticed that triggers for rules (which rules cause which other rules to be run) are explicitly encoded into the configuration. I did this partly because I already had the list of trigger dependencies for RDFS rules, and partly because I thought it would offer more flexibility. However, I had realized some time before that these dependencies were easy to work out, and had been wanting to automate this. I decided that RLog was the perfect place to do it, partly because it meant not having to change much, but also because it still allowed me the flexibility of tweaking the configuration.<br /><br />Once I'd finished writing a system that could output Krule, I tested it against my <a href="http://mulgara.org/files/misc/rdfs.dl">RDFS RLog file</a>, and compared the generated Krule to the original configuration. Initially I was disappointed to see to many dependencies, but on closer inspection I realized that they were all valid. The original dependencies were a reduced set because they applied some of the semantics of the predicates and classes they were describing, which was not something that a grammar at the level of RLog could deal with. Semi-naïve evaluation was going to stop unnecessary rules from running anyway, so I decided that these extra triggers were fine. I ran it against the various test graphs that I had, and was pleased to see that it all worked perfectly.<br /><br />But once again, work and life got in the way, and I put it aside again.<br /><br /><h3>SKOS</h3> A couple of months ago Brian asked me about running rules for generating <a href="http://www.w3.org/TR/skos-reference/">SKOS</a> entailments, as he was writing <a href="http://www.devx.com/semantic/Article/39348/1954">a paper</a> on this topic. I pointed him towards RLog and knocked up a couple of useful rules for him. However, as I got into it, I realized that I could actually do most of SKOS quite easily, and before I knew it, I'd written an entire <a href="http://mulgara.org/trac/attachment/wiki/SKOS/skos.rlog">RLog program for it</a>. 
The only thing I could not do was "<a href="http://www.w3.org/TR/skos-reference/#L3312">S35</a>", as this requires a predicate for list membership (also a requirement for OWL, and on my TODO list).<br /><br />The <em>really</em> interesting thing about this document, is that almost everything is an axiom and not a rule. It only requires 2 RDFS rules and 5 OWL rules to make the whole thing work. This is quite important, as the complexity in running the rules is generally exponential in the number of rules.<br /><br />This is (<a href="http://en.wiktionary.org/wiki/IMNSHO">IMNSHO</a>) the power of ontologies. By providing properties of classes and properties, they reduce the need for many rules. To demonstrate what I mean, I've seen a few systems (such as <a href="http://www.dbai.tuwien.ac.at/proj/dlv/">DLV</a>) which define a predicate to be transitive in the following way:<pre><code> pred(A,C) :- pred(A,B), pred(B,C).</code></pre>This works, but it creates a new rule to do it. Every new transitive predicate also gets its own rule. As I have already said, this has a significant detrimental effect on complexity.<br /><br />Conversely, models such as OWL are able to declare properties as "transitive". Each such declaration then becomes a statement rather than a rule. Indeed, all the transitive statements get covered with a single second-order rule:<pre><code> P(A,C) :- P(A,B), P(B,C), owl:TransitivePredicate(P).</code></pre>"Second-order" refers to the fact that variables can be used for the predicates (such as the variable <em>P</em> in the expression <em>P(A,B)</em>), and that predicates can appear as parameters for other predicates, such as <em>owl:TransitivePredicate(...)</em>. The symmetry of Mulgara indexes for RDF statements allows such second order constructs to be evaluated trivially.<br /><br />Using the OWL construct for transitivity, any number of predicates can be declared as transitive with no increase to the number of rules. The complexity of rules does have a component derived from the number of statements, but this is closer to linear or polynomial (depending on the specific structure of the rules), and is therefore far less significant for large systems. It is also worth noting that several OWL constructs do not need an exhaustive set of their own rules, as their properties can be described using other OWL constructs. For instance, <em>owl:sameAs</em> is declared as being <em>owl:SymmetricProperty</em>. This means that the entailment rule for <em>owl:sameAs</em> (shown above) need only be written once for <em>owl:sameAs(A,B)</em> and is not needed for symmetric case of <em>owl:sameAs(B,A)</em>.<br /><br /><h3>General Acceptance</h3> Brian wasn't the only one to like RLog. I've had reports and feature requests from a few other people who like using it as well. The most commonly requested feature has been the generation of blank nodes. The main reason for this is to handle existential formula, which makes me wary, as this can lead to infinite loops if not carefully controlled. On the other hand, I <em>can</em> see the usefulness of it, so I expect to implement it eventually.<br /><br />A related feature is to create multiple statements based on a single matched rule. 
This can usually be handled by introducing a new rule with the same body and a different head, but if a blank node has been generated by the rule, then there needs to be some way to re-use it in the same context.<br /><br />A problem with general usage is that the domains understood by RLog have been preset with the domains that I've wanted, namely: <a href="http://www.w3.org/1999/02/22-rdf-syntax-ns#">RDF</a>, <a href="http://www.w3.org/2000/01/rdf-schema#">RDFS</a>, <a href="http://www.w3.org/2002/07/owl#">OWL</a>, <a href="http://www.w3.org/2001/XMLSchema#">XSD</a>, <a href="http://mulgara.org/mulgara#">MULGARA</a>, <a href="http://mulgara.org/owl/krule/#">KRULE</a>, <a href="http://xmlns.com/foaf/0.1/">FOAF</a>, <a href="http://www.w3.org/2004/02/skos/core#">SKOS</a>, and <a href="http://purl.org/dc/elements/1.1/">DC</a>. The fix to this can be isolated in the parser, so I anticipate this being fixed by Monday. :-)<br /><br />Despite it being limited, RLog was proving to be useful, allowing me to encode systems like SKOS very easily. However, being a separate program that translated an RLog file into Krule configuration files that <em>then</em> had to be loaded and applied to data, was a serious impediment to the usage.<br /><br /><h3>Integration</h3> The Mulgara "Content Handler" interface is a mechanism for loading any kind of data as triples, and optionally writing it back out. The two main ones are the <a href="http://www.w3.org/TR/rdf-syntax-grammar/">RDF/XML</a> handler and the <a href="http://www.w3.org/DesignIssues/Notation3.html">N3</a> handler, but there are others in the default distribution as well. There is a MBox handler for representing Unix Mailbox files as RDF, and an <abbrev title="Moving Pictures Expert Group-1, Audio Layer 3">MP3</abbrev> handler which maps ID3 metadata from MP3 files. These handlers compliment the "Resolver" interface which represents external data sources as a dynamic graph.<br /><br />Since RLog has a well-defined mapping into RDF (something the RLog program was already doing when it emitted RDF/XML) then reimplementing this system as a content handler would integrate it into Mulgara with minimal effort. I had been planning on this for some time, but there always seemed to be more pressing priorities. These other priorities are still there (and still pressing!) but a few people (e.g. <a href="http://prototypo.blogspot.com/2009/02/desperately-seeking-skos-vendors.html">David</a>) have been pushing me for it recently, so I decided to bite the bullet and get it done.<br /><br />The first problem was that the parser was in Beaver. This is yet another <abbrev title="Java ARchive">JAR</abbrev> file to include at a time when I'm trying to cut down on our plethora of libraries. It also seemed excessive, since we already have both JavaCC <em>and</em> SableCC in our system - the former for SPARQL, the latter for TQL, and I hope to redo TQL in JavaCC eventually anyway. So I decided to re-implement the grammar parser in JavaCC.<br /><br />Unfortunately, it's been over a year since I looked at JavaCC, and I was very rusty. So my first few hours were spent relearning token lookahead, and various aspects of JavaCC grammar files. I actually think I know it better now than I did when I first did the SPARQL parser (that's a concern). 
There are a few parts of the grammar which are not LL(1) either, which forced me to think through the structure more carefully, and I think I benefited from the effort.<br /><br />I was concerned that I would need to reimplement a lot of the AST for RLog, but fortunately this was not the case. Once I got a handle on the translation it all went pretty smoothly, and the JavaCC parser was working identically to the original Beaver parser by the end of the first day.<br /><br />After the parser was under control I moved on to emitting triples. This was when I was reminded that writing RDF/XML can actually be a lot easier than writing raw triples. I ended up making slow progress, but I finally got it done last night.<br /><br /><h3>Testing</h3> Before running a program through the content handler for the first time, I wanted to see that the data looked as I expected it to. <a href="http://www.junit.org/">JUnit</a> tests are going to take some time to write, and with so much framework around the rules configuration, it was going to be clear very quickly if things weren't <em>just</em> right. I considered running it all through the debugger, but that was going to drown me in a sea of RDF nodes. That was when I decided that I could make the resolver/content-handler interfaces work for me.<br /><br />Mulgara can't usually tell the difference between data in its own storage, and data sourced from elsewhere. It always opts for internal storage first, but if a graph URI is not found, then it will ask the various handlers if they know what to do with the graph. By using a <strong>file:</strong> URL for the graphs, I could make Mulgara do all of it's reading and writing to files, using the content handlers to do the I/O. In this case, I decided to "export" my RLog graph to an N3 file, and compare the result to an original Krule RDF/XML file that I exported to another N3 file.<br /><br />The TQL command for this was:<pre><code> export <file:/path/rdfs.rlog> to <file:/path/rlog-output.n3></code></pre>Similarly, for the RDF/XML file was transformed to N3 with:<pre><code> export <file:/path/rdfs-krule.rdf> to <file:/path/krule-output.n3></code></pre>I love it when I can glue arbitrary things together and it all "just works". (This may explain why I like the Semantic Web).<br /><br />My first test run demonstrated that I was allowing an extra # into my URIs, and then I discovered that I'd fiddled with the literal token parsing, and was now including quotes in my strings (oops). These were trivial fixes. The third time through was the charm. I spent some time sorting my N3 files before deciding it looked practically identical, and so off I went to run an RLog program directly.<br /><br />As I mentioned in my last post, applying a set of rules to data is done with the <strong>apply</strong> command. While I could have loaded the rules into an internal graph (pre-compiling them, so to speak) I was keen to "run" my program straight from the source:<pre><code> apply <file:/path/rdfs.rlog> to <test:data:uri></code></pre>...and whaddaya know? It worked. :-)<br /><br />Now I have a long list of features to add, optimizations to make, bugs to fix, and all while trying to stay on top of the other unrelated parts of the system. Possibly even more importantly, I need to document how to write an RLog file! But for the moment I'm pretty happy about it, and I'm going to take it easy for the weekend. 
See you all on Monday!Paulahttp://www.blogger.com/profile/03653112583629043593noreply@blogger.com0tag:blogger.com,1999:blog-6848574.post-25237618229839832252009-02-27T10:46:00.003-06:002009-02-27T12:37:03.960-06:00<h3>Programmable Logic</h3> I had hoped to blog a lot this week, but I kept putting it off in order to get some actual work done. I still have a lot more to do, but I'm at a natural break, so I thought I'd write about it.<br /><br />I have finally integrated RLog into Mulgara! In some senses this was not a big deal, so it took a surprising amount of work.<br /><br />To explain what RLog is, I should describe Krule first.<br /><br /><h3>Krule</h3> The Mulgara rules engine, <abbrev title="Kowari Rules">Krule</abbrev>, implements my design for executing rules on large data sets. For those familiar with rules engines, it is similar to <a href="http://en.wikipedia.org/wiki/Rete_algorithm">RETE</a>, but it runs over data as a batch, rather than iteratively. It was designed this way because our requirements were to perform inferencing on gigabytes of RDF. This was taking a long time to load, so we wanted to load it all up, and <em>then</em> do the inferencing.<br /><br />RETE operates by setting up a network describing the rules, and then as statements are fed through it, each node in that network builds up a memory of the data it has seen and passed. As data comes in the first time, the nodes that care about it will remember this data and pass it on, but subsequent times through, the node will recognize the data and prevent it from going any further. In this way, every statement fed into the system will be processed as much as it needs to be, and no more. There is even a proof out there that says that RETE is the optimal rules <em>algorithm</em>. Note that the implementation of an algorithm can, and does, have numerous permutations which can allow for greater efficiency in some circumstances, so RETE is often treated as the basis for efficient engines.<br /><br />One variation on algorithms of this sort is to trade <em>time</em> for <em>space</em>. This is a fancy way of saying that if we use more memory then we can use less processing time, or we can use more processing time to save on memory. Several variations on RETE do just this, and so does Krule.<br /><br />When looking at the kinds of rules that can be run on RDF data, I noticed that the simple structure of RDF meant that each "node" in a RETE network corresponds to a constraint on a triple (or a <abbrev title="Basic Graph Pattern">BGP</abbrev> in <a href="http://www.w3.org/TR/rdf-sparql-query/">SPARQL</a>). Because Mulgara is indexed in "every direction", this means that every constraint can be found as a "slice" out of one of the indexes (while our current system usually takes <em>O(log(n))</em> to find, upcoming systems can do some or all of these searches in <em>O(1)</em>). Consequently, instead of my rule network keeping a table in memory associated with every node, there is a section out of an index which exactly corresponds to this table.<br /><br />There are several advantages to this. First, the existence of the data in the database is defined by it being in the indexes. This means that all the data gets indexed, rules engine or not. Second, when the rules engine is run, there is no need to use the data to iteratively populate the tables for each node, as the index slices (or <strong>constraint resolutions</strong>) are <em>already</em> fully populated, by definition. 
Finally, our query engine caches constraint resolutions, and they do not have to be re-resolved if no data has gone in that can affect them (well, some of the caching heuristics can be improved for better coverage, but the potential is there). This means that the "tables" associated with each node will be automatically updated for us as the index is updated, and the work needed to handle updates is minimal.<br /><br />During the first run of Krule, none of the potential entailments have been made yet, so everything is potentially relevant. However, during subsequent iterations of the rules, Krule has no knowledge of which statements are new in the table on any given node. This means it will produce entailed statements that already exist, and are duplicates. Inserting these is unnecessary (and hence, suboptimal) and creates unwanted duplicates. We handle this in two ways.<br /><br />The first and simpler mechanism is that Mulgara uses <a href="http://en.wikipedia.org/wiki/Set_(mathematics)">Set semantics</a>, meaning that any duplicates are silently (and efficiently) ignored. Set semantics are important when dealing with RDF, and this is why I'm so frustrated at non-distinct nature of SPARQL queries... but that's a discussion for another time. :-)<br /><br />The more important mechanism for duplicate inserts is based on RDF having a property of being monotonically increasing. This is because RDF lets you assert data, but not to "unassert" it. OWL 2 has introduced explicit denial of statements, but this is useful for preventing entailments and consistency checking... it does not remove previously existing statements. In non-monotonic systems a constraint resolution may keep the same size if some statements are deleted while an equal number of statements are inserted, but in a monotonic system like RDF, keeping the same size means that there has been no change. So a node knows to pass its data on if the size of its table increases, but otherwise it will do nothing. I stumbled across this technique as an obvious optimization, but I've since learned that it has a formal name: <em>semi-naïve evaluation</em>.<br /><br /><h3>Krule Extensions</h3> While this covers the "batch" use-case, what about the "iterative" use-case, where the user wants to perform inferences on streams of data as it is inserted into an existing database? In this case, the batch approach is too heavyweight, as it will infer statements that are almost entirely pre-existing. We might handle the non-insertion of these statements pretty well, but if you do the work again and again for every statement you try to insert, then it will add up. In this case, the iterative approach of standard RETE is more appropriate.<br /><br />Unfortunately, RETE needs to build its tables up by iterating over the entire data set, but I've already indicated that this is expensive for the size of set that may be encountered. However, the Krule approach of using constraint resolutions as the tables is perfect for pre-populating these tables in a standard RETE engine. I mentioned this to Alex a few months ago, and he pointed out that he did exactly the same thing once before when implementing RETE in <abbrev title="Tucana Knowledge Store">TKS</abbrev>.<br /><br />I haven't actually done this extension, but I thought I'd let people know that we haven't forgotten it, and it's in the works. 
It will be based on Krule configurations, so a lot of existing work will be reused.<br /><br />I don't want to overdo it in one post, so I'll write about RLog in the next one.Paulahttp://www.blogger.com/profile/03653112583629043593noreply@blogger.com1tag:blogger.com,1999:blog-6848574.post-75135713167501218182009-02-19T00:34:00.002-06:002009-02-19T00:47:08.730-06:00<h3>Is it Me?</h3> After struggling against nonsensical errors all day, I finally realized that either Eclipse was going mad, or I was. In frustration I finally updated a method to look like this:<pre><code> public void setContextOwner(ContextOwner owner) {<br /> contextOwner = owner;<br /> if (owner != contextOwner) throw new AssertionError("VM causing problems");<br /> }</code></pre>This code throws an AssertionError, and yes, there is only 1 thread. If it were multithreaded at least I'd have a starting point. The problem only appears while debugging in Eclipse.<br /><br />I'm not sure whether I'm happy to have isolated my problem down to a single point, or to be unhappy at something that is breaking everything I am trying to work on. I guess I'm unhappy, because it's kept me up much later than I'd hoped.Paulahttp://www.blogger.com/profile/03653112583629043593noreply@blogger.com2tag:blogger.com,1999:blog-6848574.post-67737943790738218932009-02-17T00:48:00.003-06:002009-02-17T02:16:08.381-06:00<h3>Resting</h3> I've had a couple of drinks this evening. Lets see if that makes me more or less intelligible.<br /><br />Now that I've added a <abbr title="REpresentational State Transfer">REST</abbr>-like interface to Mulgara, I've found that I've been using it more and more. This is fine, but to modify data I've had to either upload a document (a very crude tool) or issue write commands on the <abbr title="Tucana Query Language">TQL</abbr> endpoint. Neither of these were very RESTful, and so I started wondering if it would make sense to do something more direct.<br /><br />From my perspective (and I'm sure there will be some who disagree with me), the basic resources in an RDF database are graphs and statements. Sure, the URIs themselves are resources, but the perspective of RDF is that these resources are infinite. Graphs are a description of how a subset of the set of all resources are related. Of course, these relationships are described via statements.<br /><br /><h3>Graphs and Statements</h3> So I had to work out how to represent a graph in a RESTful way. Unfortunately, graphs are already their own URI, and this probably has nothing to do with the server it is on. However, REST requires a URL which identifies the host and service, and then the resource within it. So the graph URI has to be embedded in the URL, after the host. While REST URLs typically try to reflect structure in a path, encoding a URL makes this almost impossible. Instead I opted to encode the graph URI as a "graph" parameter.<br /><br />Statements posed a similar though more complex challenge. I still needed the graph, so this had to stay. Similarly, the other resources also needed to be encoded as parameters, so I added this as well. This left me with 2 issues: blank nodes and literals.<br /><br /><h3>Literals</h3> Literals were reasonably easy... sort of. I simply decided that anything that didn't look like a URI would be a literal. Furthermore, if it was structured like a SPARQL literal, then this would be parsed, allowing a datatype or language to be included. 
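<br /><br />As a rough illustration of that parsing (a sketch of the idea only, not the actual Mulgara code, with names of my own choosing), pulling the pieces out of a SPARQL-style literal looks something like:<pre><code>// Sketch only: split "foo"@en or "5"^^<datatype-uri> style literals into their<br />// lexical form plus an optional language tag or datatype (escaping ignored).<br />class LiteralParam {<br />  String lexical, language, datatype;<br /><br />  static LiteralParam parse(String value) {<br />    LiteralParam lit = new LiteralParam();<br />    int close = value.lastIndexOf('"');<br />    lit.lexical = value.substring(1, close);<br />    String suffix = value.substring(close + 1);<br />    if (suffix.startsWith("@")) lit.language = suffix.substring(1);<br />    else if (suffix.startsWith("^^<")) lit.datatype = suffix.substring(3, suffix.length() - 1);<br />    return lit;<br />  }<br />}</code></pre>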
However, nothing is never <em>really</em> easy (of course) and I found myself wondering about relative URIs. These had never been allowed in Mulgara before, but I've brought them in recently after several requests. Most people will ignore them, but for those people who have a use, they can be handy. That all seems OK, until you realize that the single quote character " is an <em>unreserved</em> character in URIs, and so the apparent literal <em>"foo"</em> is actually a valid relative URI. (Thank goodness for unit tests, or I would never have realized that). In the end, I decided to treat any valid URI as a URI and not a literal, <em>unless</em> it starts with a quote. If you really want a relative URI of <em>"foo"</em> then you'll have to choose another interface.<br /><br /><h3>Blank Nodes</h3> Blank nodes presented another problem. Initially, I decided that any missing parameter would be a blank node. That worked well, but then I started wondering about using the same blank node in more than one statement. I'm treating statements as resources, and you can't put more than one "resource" into a REST URL, so that would mean referring to the same "nameless" thing in two different method calls, which isn't possible. Also, adding statements with a blank node necessarily creates a new blank node every time, which breaks idempotency.<br /><br />Then what about deletion? Does nothing match, or does the blank node match everything? But doing matches like that means I'm no longer matching a single statement, which was what I was trying to do to make this REST and not RPC for a query-like command.<br /><br />Another option is to refer to blanks with a syntax like <code>_:123</code>. However, this has all of the same problems we've had with exactly this idea in the query language. For instance, these identifiers are not guaranteed to match between different copies of the same data. Also, introducing new data that includes the same ID will accidentally merge these nodes incorrectly. There are other reasons as well. Essentially, you are using a name for something that was supposed to be nameless, and because you're not using URIs (like named things are supposed to use) then you're going to encounter problems. URIs were created for a reason. If you need to refer to something in a persistent way, then use a name for it. (Alternatively, use a query that links a blank node through a functional/inverse-functional predicate to uniquely identify it, but that's another discussion).<br /><br />So in the end I realized that I can't refer to blank nodes at all in this way. But I think that's OK. There are other interfaces available if you need to work with blank nodes, and <a href="http://www.talis.com/platform/">some applications</a> prohibit them anyway.<br /><br /><h3>Reification</h3> Something I wanted to come back to is this notion of representing a statement as 3 parameters in a URL (actually 4, since the graph is needed). The notion of representing a statement as a URI has already been addressed in reification, however I dismissed this as a solution here since reifying a statement does not imply that statement exists (indeed, the purpose of the reification may be to say that the statement is false). 
All the same, it's left me thinking that I should consider a way to use this interface to reify statements.<br /><br /><h3>Methods</h3> So the methods as they stand now are:<br /><table border="1"><tr><th>method / resource</th><th>Graph</th><th>Statement</th><th>Other</th></tr><tr><th>GET</th><td align="center">N/A</td><td align="center">N/A</td><td align="center">Used for queries.</td></tr><tr><th>POST</th><td align="center">Upload graphs</td><td align="center">N/A</td><td align="center">Write commands (not SPARQL)</td></tr><tr><th>PUT</th><td align="center">Creates graph</td><td align="center">Creates statement</td><td align="center">N/A</td></tr><tr><th>DELETE</th><td align="center">Deletes graph</td><td align="center">Deletes statement</td><td align="center">N/A</td></tr></table><br />I haven't done HEAD yet (I intend to indicate if a graph or statement exists), and I'm ignoring OPTIONS.<br /><br />I've also considered what it might mean to GET a statement or a graph. When applied to a graph, I could treat this as a synonym for the query:<pre><code> construct {?s ?p ?o} where {?s ?p ?o}</code></pre>Initially I didn't think it made much sense to GET a statement, but while writing this it occurs to me that I could return a reification URI, if one exists (this is also an option for HEAD, but I think <em>existence</em> is a better function there).<br /><br /><h3>Is There a Point?</h3> Everything I've discussed here may seem pointless, especially since there are alternatives, none of it is standard, and I'm sure there will be numerous criticisms of my choices. On the other hand, I wrote this because I found that uploading a document at a time was too crude for real coding. I also find that constructing a TQL command to modify data is a little too convoluted in many circumstances, and that a simple PUT is much more appropriate.<br /><br />So, I'm pretty happy with it, for the simple fact that <em>I</em> find it useful. If anyone has suggestions for modifications or features, then I'll be more than happy to take them on board.Paulahttp://www.blogger.com/profile/03653112583629043593noreply@blogger.com0