I've noticed that my new blog has slightly different options than my old one. For instance, the new blog has a "title" that I can fill in for every post, but this blog doesn't. I wonder why the difference, and whether it's possible to upgrade?
Also, I've been using <h3> tags in the new blog, to match the template I'm using there. I notice that <strong> doesn't do a lot in the current stylesheet for this blog, so I decided to try using <h3> here as well.
The natural approach to mapping a file is simply to map the entire file from beginning to end. Any accesses to the file become simple put() operations, which in turn get translated into direct memory accesses.
In Java this suffers from two major problems. The first is the already-mentioned 2GB limit on a single mapping, even on 64-bit operating systems.
The second problem is the inherent difficulty of remapping the file when it grows, since it is impossible to guarantee that a file gets unmapped. A new mapping plus the old one may have a combined total larger than the available address space, making it impossible for both to exist at once. Another issue is that remapping a file that has grown is a very expensive operation, particularly when the garbage collector gets run repeatedly in an attempt to clean up the older file mappings.
A way to avoid all of these issues is to break the mapping into a series of smaller consecutive mappings:
- Even though the 2GB limit exists for a single mapping, a 64-bit system may map up to hundreds of GB of files across multiple mappings.
- Expanding a file just means adding a new mapping at the end of the file, so files get expanded in "chunks". The old mappings hang around without conflicting with the new mapping, and without crowding out memory.
- Leaving the old mappings alone means there is no overhead in cleaning them up, particularly the expensive garbage collection.
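A minimal sketch of this chunked approach, assuming hypothetical names (ChunkedMapper, mapChunks) and a toy 1MB chunk size rather than whatever the real code uses:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class ChunkedMapper {
    // 1MB chunks for illustration only; real code would pick a much
    // larger size (still under 2GB, the limit for a single mapping).
    static final long CHUNK_SIZE = 1L << 20;

    // Map the whole file as a series of consecutive fixed-size chunks.
    static MappedByteBuffer[] mapChunks(FileChannel channel, long fileSize)
            throws IOException {
        int chunks = (int) ((fileSize + CHUNK_SIZE - 1) / CHUNK_SIZE);
        MappedByteBuffer[] buffers = new MappedByteBuffer[chunks];
        for (int i = 0; i < chunks; i++) {
            long offset = i * CHUNK_SIZE;
            // The final chunk may be shorter than CHUNK_SIZE.
            long length = Math.min(CHUNK_SIZE, fileSize - offset);
            buffers[i] = channel.map(FileChannel.MapMode.READ_WRITE, offset, length);
        }
        return buffers;
    }
}
```

Since each chunk starts at a fixed offset, none of the mappings ever needs to move when the file grows.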
The file is mapped through an array of MappedByteBuffer called (funnily enough) mappedByteBuffers. Enabling access to the same data as ints and longs is a pair of associated view-buffer arrays.
The use of these arrays of buffers makes everything in this file more complex, but not excessively so. The constructor calculates the number of buffers needed to cover the entire file, and sets up the arrays to hold them. It then passes the work on to a method called mapFile(). This method loops through the array of buffers, and for each element sets up the mapping to the appropriate area of the file. The method is also called when a file is expanded, in which case it copies all the existing mappings from the original array and appends the new mappings to the end of the new array. Any failed attempt at mapping results in the garbage collector being run and the mapping being attempted once more; this happens in a loop that runs up to 10 times (yes, that's a magic number that seems to work well; it should probably be configurable).
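The retry-and-collect idea can be sketched like this; the names RetryMapper and mapWithRetry are illustrative, not the actual MappedIntFile code. A failed map() call triggers System.gc() in the hope of releasing stale mappings before trying again:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class RetryMapper {
    static final int MAX_ATTEMPTS = 10; // the "magic number" from the text

    // Try to map a region; on failure, run the GC and retry,
    // giving up after MAX_ATTEMPTS tries.
    static MappedByteBuffer mapWithRetry(FileChannel channel, long offset, long length)
            throws IOException {
        IOException lastFailure = null;
        for (int attempt = 0; attempt < MAX_ATTEMPTS; attempt++) {
            try {
                return channel.map(FileChannel.MapMode.READ_WRITE, offset, length);
            } catch (IOException e) {
                lastFailure = e; // often address-space exhaustion
                System.gc();     // encourage release of unreachable mappings
            }
        }
        throw lastFailure;
    }
}
```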
A mapped file can be expanded or contracted by calling setSize(). There are currently no use cases for shrinking a file, except shrinking it to zero size (with clear()), so sizes smaller than the current size are ignored. Once the new size is determined, mapFile() is called again to update the mapping arrays and create the new mappings.
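The expansion path might look something like this sketch (ChunkExpander and CHUNK_SIZE are assumed names, not the real code). The key point is that the existing mappings are copied across unchanged, and only the new chunks get mapped:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class ChunkExpander {
    static final long CHUNK_SIZE = 1L << 20; // hypothetical 1MB chunks

    // Grow the mapping array to cover newSize; smaller sizes are ignored.
    static MappedByteBuffer[] expand(MappedByteBuffer[] old, FileChannel channel,
                                     long newSize) throws IOException {
        int needed = (int) ((newSize + CHUNK_SIZE - 1) / CHUNK_SIZE);
        if (needed <= old.length) return old; // no shrinking
        MappedByteBuffer[] buffers = new MappedByteBuffer[needed];
        System.arraycopy(old, 0, buffers, 0, old.length); // keep existing mappings
        for (int i = old.length; i < needed; i++) {
            // A READ_WRITE mapping past the end grows the file as needed.
            buffers[i] = channel.map(FileChannel.MapMode.READ_WRITE,
                                     (long) i * CHUNK_SIZE, CHUNK_SIZE);
        }
        return buffers;
    }
}
```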
Each of the put methods first needs to determine the correct mapping to use (by dividing the absolute offset by the size of the mappings) and the offset into that mapping (by taking the absolute offset mod the size of the mappings). Does this remind any Intel "protected mode" programmers of selectors and offsets? :-)
Once the correct offset within the specified mapping is found, the access goes ahead as normal.
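A sketch of that selector/offset arithmetic; the names ChunkAddressing, INTS_PER_CHUNK, and intBuffers are hypothetical, and heap-allocated IntBuffers stand in for the real mapped buffers:

```java
import java.nio.IntBuffer;

public class ChunkAddressing {
    // Hypothetical value: the number of ints held by each mapping.
    static final int INTS_PER_CHUNK = 1024;

    private final IntBuffer[] intBuffers; // int views of the mapped chunks

    ChunkAddressing(IntBuffer[] intBuffers) {
        this.intBuffers = intBuffers;
    }

    void putInt(long index, int value) {
        int chunk = (int) (index / INTS_PER_CHUNK);  // which mapping ("selector")
        int offset = (int) (index % INTS_PER_CHUNK); // position within it ("offset")
        intBuffers[chunk].put(offset, value);
    }

    int getInt(long index) {
        return intBuffers[(int) (index / INTS_PER_CHUNK)]
                .get((int) (index % INTS_PER_CHUNK));
    }
}
```

With a power-of-two chunk size, the division and modulus reduce to a shift and a mask, which is cheaper still.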
Some of this may sound like significant overhead, but it is not. The majority of the extra work is simple integer arithmetic, which most modern processors can perform in parallel with other operations.
The result is a MappedIntFile that can be expanded while mapping a large portion of memory, can handle more than 2GB, and is as fast as or faster than a single large mapping on every operation.