I was fortunate enough to attend the first day of Sun’s Kernel Conference Australia 2009 today (I was a last-minute substitute for a colleague who couldn’t make it).
The conference opened with a keynote by Jeff Bonwick and Bill Moore, Sun’s masterminds behind ZFS, in which they ran through a big list of the cool stuff that has been happening with ZFS lately. I thought their talk was well presented, and it definitely left me with a renewed desire to play with ZFS (if only FreeBSD or OpenSolaris would run on my hardware at home…).
Most interesting for me was a new feature they described, called the L2ARC (Level 2 Adaptive Replacement Cache), which allows expensive but speedy flash memory to be used as a kind of midway point between the in-memory cache and the much slower underlying magnetic disks. When entries are evicted from the in-memory cache (the ARC), they can be moved to the flash device, which provides much faster seek times and throughput than a spinning disk, but with a smaller capacity.
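To make the idea concrete, here’s a little toy sketch of my own in Python (nothing to do with the real ZFS code, which uses the ARC replacement policy rather than the plain LRU I’ve used here): entries pushed out of the small, fast tier spill over into a bigger, slower tier, and only a miss in both tiers has to go all the way to disk.

```python
# Toy two-level cache: a tiny "memory" tier backed by a larger "flash" tier.
# Purely illustrative; real ZFS uses the ARC algorithm, not plain LRU.
from collections import OrderedDict

class TwoLevelCache:
    def __init__(self, mem_capacity, flash_capacity):
        self.mem = OrderedDict()     # small and fast: stands in for the ARC
        self.flash = OrderedDict()   # bigger but slower: stands in for the L2ARC
        self.mem_capacity = mem_capacity
        self.flash_capacity = flash_capacity

    def get(self, key, read_from_disk):
        if key in self.mem:                    # hit in memory
            self.mem.move_to_end(key)
            return self.mem[key]
        if key in self.flash:                  # hit in flash: promote back to memory
            value = self.flash.pop(key)
        else:                                  # miss in both tiers: pay the disk cost
            value = read_from_disk(key)
        self._insert_mem(key, value)
        return value

    def _insert_mem(self, key, value):
        self.mem[key] = value
        self.mem.move_to_end(key)
        if len(self.mem) > self.mem_capacity:
            old_key, old_value = self.mem.popitem(last=False)
            self._insert_flash(old_key, old_value)   # evict to flash, not straight to nothing

    def _insert_flash(self, key, value):
        self.flash[key] = value
        self.flash.move_to_end(key)
        if len(self.flash) > self.flash_capacity:
            self.flash.popitem(last=False)           # drop entirely; the disk still has it

# Hypothetical usage: the key name and the disk-read lambda are made up for illustration.
cache = TwoLevelCache(mem_capacity=2, flash_capacity=8)
data = cache.get("block-7", read_from_disk=lambda k: f"<contents of {k}>")
```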
Jeff and Bill presented some very impressive benchmark results for this new work. They compared two similarly priced configurations: one running a bunch of fast SAS disks, the other configured with a high-reliability SSD as a write cache, a less reliable but much larger SSD used for the L2ARC, and a bunch of low-power, high-capacity hard disks. Their results showed the latter outperforming the former in all respects, most notably in power consumption (since it does away with the high-RPM disks entirely). I wish I could find their benchmark results online somewhere, because the numbers were impressive. This kind of configuration really does seem like a viable improvement for high-density storage.
The presentation by Pawel Dawidek, a FreeBSD committer, gave a run-down of GEOM(4) in FreeBSD. He went a bit overboard with the Keynote transitions, but otherwise his presentation was very accessible. Certain individuals have told me about GEOM and its advantages in the past, but when Pawel ran through the essentials of how the GEOM abstractions are put together, I realised that the model is very flexible yet simple, and makes a lot of sense. It made me wish that dm and md and all their bits and pieces on Linux would fit together as nicely and cleanly as GEOM does.
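To show the kind of composability I mean, here’s a toy sketch of my own in Python (purely illustrative, and nothing like the actual kernel code): because every layer exposes the same block-device-style interface, mirroring, transformation, and so on can be stacked in whatever order you like.

```python
# Toy GEOM-flavoured stacking: every layer exposes read()/write(), so layers compose freely.
# Purely illustrative; the class names and behaviour here are my own invention.
class RawDisk:
    def __init__(self, size):
        self.blocks = bytearray(size)
    def read(self, offset, length):
        return bytes(self.blocks[offset:offset + length])
    def write(self, offset, data):
        self.blocks[offset:offset + len(data)] = data

class Mirror:
    """Writes go to every underlying provider; reads are served from the first."""
    def __init__(self, *providers):
        self.providers = providers
    def read(self, offset, length):
        return self.providers[0].read(offset, length)
    def write(self, offset, data):
        for p in self.providers:
            p.write(offset, data)

class Scramble:
    """Stand-in for an encryption layer: transforms data on the way through."""
    def __init__(self, provider, key=0x5A):
        self.provider, self.key = provider, key
    def read(self, offset, length):
        return bytes(b ^ self.key for b in self.provider.read(offset, length))
    def write(self, offset, data):
        self.provider.write(offset, bytes(b ^ self.key for b in data))

# Because everything speaks the same interface, "scrambled mirror" and
# "mirror of scrambled disks" are just different stackings of the same pieces.
dev = Scramble(Mirror(RawDisk(1024), RawDisk(1024)))
dev.write(0, b"hello")
assert dev.read(0, 5) == b"hello"
```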
The panel discussion in the afternoon with Jeff, Bill, and Pawel was also fascinating. It was much less technical than their presentations, and was more focused on some of the challenges they’re facing with ZFS and the directions they see it going in future.
The OpenBSD presentation (originally to be given by Henning Brauer, who was replaced at the last minute by UQ’s David Gwynne) described some performance improvements that have been made to the OpenBSD network stack and pf (packet filter). I thought David’s presentation style was fun and informative; despite knowing nothing about the implementation of OpenBSD, or even of network stacks in general, I still managed to take away a lot of interesting (if OpenBSD-specific) knowledge from the presentation.
The quality of the other three presentations was disappointing. I spent most of that time wishing I’d brought along my EeePC so that I could have got something done instead of just staring at the ceiling.
This was the first time the conference had been run, and all in all the organizers seemed to have done a good job. Things ran more or less smoothly, the QBI building was impressive (if somewhat ostentatious), and even if some of the presentations were a bit lacking, I still found the whole experience highly informative and worthwhile (certainly more so than JAOO!).