During the week of 13–17 January, I attended the Linux.conf.au conference at the Gold Coast Convention Centre. The official theme for this year’s conference was “Who’s watching”, and although two of the three keynote talks were related to digital privacy and security, I was there for the more technical stuff that made up most of the conference (as, I suspect, were most of the other attendees).
Below are my thoughts about the talks I saw on day one, which was the first of two days of miniconfs.
OpenZFS and Linux – Nikolai Lusan
He gave a tour of ZFS features but didn’t address any of the controversies affecting ZFS use by normal people: licensing, ARC memory usage, or ZFS as a “layering violation” (as opposed to keeping the block layer separate, as with LVM, LUKS, etc.).
Most of the features he mentioned are not at all new to ZFS and I remembered them from playing with ZFS on FreeBSD many years ago.
The ARC in-memory cache is fixed in size and he mentioned the importance of tuning it; by default it occupies half(!) of physical memory. Does this mean the ARC doesn’t cooperate with the normal Linux VFS cache? This is the kind of thing that renders ZFS useless for normal people. Maybe it makes sense on a dedicated NAS.
He said that the “ZFS on Linux” project is now the biggest contributor to the ZFS source code, but didn’t explain by what measure. Where are the $$$ coming from for all this work, and who is doing it?
He misspoke in a very confusing way when he claimed that “btrfs is deprecated now”, giving the impression it is totally dead, whereas it lives on and is still being maintained and developed. He was presumably referring to Red Hat’s announcement that they would not offer support for it in RHEL (which they had never offered anyway).
RISC-V FDPIC/NOMMU toolchain/runtime support – Maciej W. Rozycki
The first of a series of RISC-V-related talks by Western Digital. This one was jargon-heavy and I could barely keep up with it, even though I already knew that PIC means position-independent code and what a linker relocation is. I’m still not sure what FDPIC stands for.
They are trying to define a new RISC-V ABI to work better on some kind of small embedded cores that run Linux-like processes but have no MMU (and no kernel?). Some kind of RTOS, presumably. No details were given about the actual motivation for this work, which suggests it’s for some secret WD product that hasn’t been disclosed yet.
The existing ABI assumes that both text and data addresses in a program are relative to a single fixed load address. On these no-MMU systems that can work, but it means each running copy of the program needs its own contiguous copy of both the text and data segments. It precludes sharing a single text segment between processes, or fragmenting the data segment, both of which you might want on a no-MMU, low-memory system.
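If I understood the problem correctly, the rough idea behind the FDPIC-style fix – sketched here conceptually the way it works on other no-MMU architectures that already have it, not as the proposed RISC-V encoding – is to make a “function pointer” point at a small descriptor instead of straight at the code:

    #include <stdint.h>

    /* Conceptual sketch only: a descriptor carries both the code address and
       this process's data/GOT base, so the text segment can sit anywhere (and
       be shared between processes) while each process keeps its own data. */
    struct func_descriptor {
        void *entry;      /* address of the code in the (shared) text segment */
        void *got_base;   /* this process's private data/GOT base */
    };

    /* An indirect call goes through the descriptor: load the GOT base into a
       dedicated register, then jump to the entry point. Shown as plain C here;
       in a real ABI the call sequence itself does this. */
    static int call_through_descriptor(const struct func_descriptor *fd, int arg)
    {
        int (*fn)(void *got, int a) = (int (*)(void *, int))fd->entry;
        return fn(fd->got_base, arg);  /* callee finds its data via got_base */
    }

All the names above are my own inventions for illustration.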
He showed the new relocation type they are proposing in great detail. It went over my head.
RISC-V already defers a lot of work to the runtime linker, much more than any other architecture; it seems like this ABI makes that situation even worse.
RISC-V 32-bit glibc port – Alistair Francis
Another of the Western Digital roadshow. The title made this talk sound broader than it was: he was really just describing the challenge of handling 64-bit time_t in 32-bit RISC-V glibc. They are being forced to handle it now because, just before the 32-bit RISC-V port landed in Linux, the kernel people made a rule that all new 32-bit architectures must support 64-bit time_t to address the Y2038 problem. So 32-bit RISC-V is the first architecture forced to deal with it. Apparently it is difficult and messy. He said glibc upstream is rigorous and exacting but pleasant to work with, contrary to its past reputation.
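For anyone who hasn’t bumped into it before, the whole Y2038 problem fits in a few lines of C (nothing RISC-V-specific about it):

    #include <inttypes.h>
    #include <stdio.h>

    /* A signed 32-bit count of seconds since 1970-01-01 runs out at
       03:14:07 UTC on 19 January 2038; the fix is a 64-bit time_t. */
    int main(void)
    {
        int32_t last_32bit_second = INT32_MAX;  /* 2038-01-19 03:14:07 UTC */
        int64_t one_second_later  = (int64_t)last_32bit_second + 1;

        printf("last 32-bit timestamp: %" PRId32 "\n", last_32bit_second);
        printf("one second later:      %" PRId64 " (needs 64 bits)\n",
               one_second_later);
        return 0;
    }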
Co-developing RISC-V hypervisor support – Anup Patel
The third Western Digital talk. I didn’t get it from the title, but “hypervisor support” here means defining the actual H extension to the instruction set – it is apparently not final. And “co-developing” just means implementing code that uses the H extension concurrently with writing the spec for it. The idea is to avoid the “standards by committee” approach, which can produce specs that are difficult to implement in real life: you validate the spec by ensuring you can build a working implementation of it before you freeze it.
No real detail about the actual H extension itself or the hypervisor implementation, other than a throwaway reference to OpenSBI. I had to look that up afterwards.
An audience member asked the speaker why WD is interested in Linux hypervisors (I was wondering the same thing) and the answer was evasive. There is presumably some unannounced product under development.
Microwatt: a simple OpenPOWER soft processor – Anton Blanchard
Whereas Western Digital dominated the morning of the open ISA miniconf, IBM took the stage for the afternoon.
He spruiked IBM’s release of the Power ISA specs and patent grant in 2019. Of course none of their real CPUs are open, but (presumably to get on the open ISA bandwagon and make themselves sound cool) a team from IBM has produced an integer-only FPGA soft-core Power design called Microwatt. It sounds similar to existing open source RISC-V cores like picorv32: they are targeting the same simple, small, integer-only, low-performance, hobbyist niche.
They have achieved some impressive-sounding results in a short amount of time. The Microwatt core can run MicroPython, the Zephyr RTOS, Rust, and Forth. And it has instruction-level regression tests hooked up to CI, which might be a first for open source CPU designs.
Build your own open hardware CPU in 25 minutes – Anton Blanchard
A follow-on to the previous talk introducing Microwatt. It was originally supposed to be presented by two different speakers.
This was a very hands-on, code-heavy walkthrough of adding a new fake instruction to the Microwatt core and implementing and testing it, all the way through to calling it from program code running in a simulated copy of the core. Very neat.
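I didn’t write down the actual example, but the flavour of that final step – poking a brand-new instruction from C before the assembler knows its mnemonic – is roughly this (the encoding and register choices below are placeholders of my own, not what was shown in the talk):

    #include <stdint.h>
    #include <stdio.h>

    /* Pin the operands to specific registers and emit the raw instruction
       word with .long, since the assembler has no mnemonic for it yet.
       0x7C601234 is a made-up placeholder encoding. */
    static inline uint64_t run_new_insn(uint64_t in)
    {
        register uint64_t arg asm("r4") = in;
        register uint64_t res asm("r3");

        asm volatile(".long 0x7C601234"     /* placeholder encoding */
                     : "=r"(res)
                     : "r"(arg));
        return res;
    }

    int main(void)
    {
        printf("result: 0x%016llx\n", (unsigned long long)run_new_insn(42));
        return 0;
    }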
Microwatt microarchitecture – Paul Mackerras
He spoke like somebody with a lot of experience in CPU design. A lot of the terminology was unfamiliar to me but the Microwatt design is small and simple enough that I felt like I could still follow along. He gave an overview of the instruction pipeline flow and data flow in the Microwatt design, along with some ideas for future improvements they want to tackle. It left me feeling excited to try understanding and messing around with these open soft cores like Microwatt and picorv32.
Paying it forward: documenting your open hardware project – Sean Cross
I don’t think I was the target audience for this talk. He very briefly mentioned that LiteX is a higher-level Python DSL for producing HDL for system-on-chip designs, but I think the talk was really meant for people who already use LiteX. Some people in the audience didn’t seem to realise that he wasn’t talking about ordinary Python code but rather a DSL for producing HDL.
The actual point of the talk was that writing a hardware manual for an IC (the documentation that spells out things like register definitions, pinouts, timing diagrams and other details needed for writing software) is painful and difficult to keep in sync with the evolving design, so most open source hardware projects don’t bother.
He has written a tool which extracts Python docstrings from LiteX source code in order to generate a manual for your chip. It can produce register descriptions, IRQ details, data formats, and SVD XML.
I’m not clear how this is “paying it forward”.
Picolibc: a C library for small 32-bit systems – Keith Packard
A new project Keith has been working on for his day job doing 32-bit embedded. It is a libc for bare-metal use – not written from scratch, but largely derived from newlib. He took out newlib’s OS abstraction layer (since there is no OS) and fixed up its broken tests, getting them running in CI on all supported targets using QEMU. He has also done a lot of work on code size optimisation, which is often important in embedded.
In the second half of the talk he gave a quick tour of “how to bare metal embedded”.
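To give a flavour, a minimal picolibc-style “hello world” as I understand it looks something like this (the UART address is made up, and the exact build flags depend on your target and how picolibc was installed):

    #include <stdio.h>

    /* With picolibc's tiny stdio the application supplies the low-level
       character output and wires it up as stdout. Hypothetical UART here. */
    #define UART_TX (*(volatile char *)0x10000000)

    static int uart_putc(char c, FILE *file)
    {
        (void)file;
        UART_TX = c;
        return c;
    }

    static FILE uart_stream = FDEV_SETUP_STREAM(uart_putc, NULL, NULL,
                                                _FDEV_SETUP_WRITE);
    FILE *const stdout = &uart_stream;

    int main(void)
    {
        printf("hello from bare metal\n");
        return 0;
    }

    /* Built with something along the lines of (varies by target):
       riscv64-unknown-elf-gcc --specs=picolibc.specs -o hello.elf hello.c */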