raspi is not the ARM I’m looking for

I got a raspberry pi a couple of years ago as a Christmas present from my brother-in-law. I like the idea: a cheap computer with roughly the same horsepower as my workstations of yesteryear, in a power-sipping small form factor. I want to believe, but the experience is disappointing.

The first disappointment was that the accouterments for the pi (power supply, SD card, functional case, wifi adapter) cost as much as the base unit itself. The 1541 disk drive also cost more than a C64, so this isn’t a horrible shock, but the $40 pi price is not all-inclusive.

I christened my pi “bunnypi,” and originally had it mounted on the back of a dedicated monitor with a LaRu bunny sticker on it. My original plan was to introduce my son to old-school DOS games via DOSBox. This was marginally successful, since DOSBox on the pi runs only at roughly the level of a 286-12, with noticeable performance glitches. This hasn’t stopped my son from playing with “the guys” in Ultima VI, or matching words in Reader Rabbit, but an actual 386 is more performant. ScummVM seems to run reasonably well with older titles, but its pi port is not frequently updated from upstream, so some of my favorite games remain unsupported.

Bunnypi is not terribly reliable. It seems prone to overheating, and a few times a year the raspbian updater corrupts its own bootloader, leaving me to manually fix up the firmware and loader. Part of the unreliability is just cheap hardware — the “official” power supply I received with mine seems to have lost capacity over time, and plugging in the WiFi adapter would reset the system; it would even hang at boot if the WiFi was left plugged in. I replaced the power supply with a beefier unit from my now-broken tablet (which can source the extra current needed for charging), and it seems to be working better, although I’m still seeing periodic USB disconnect / reconnect cycles. As for the linux updates / upgrades, screwing up the boot process somehow seems par for the linux distro course. (Take a distro of your choice, find the earliest version that will install on your target hardware or VM, walk it through updates / upgrades to the latest version, and see if it makes it…)

The on-board audio of bunnypi is noisy PWM, limiting its utility for music playback. The noise is signal-correlated, so adding some noise-shaped dither might help, but setting up the audio output chain is quite fiddly. Add-on boards with discrete DACs are available, or maybe a cheap $5 USB audio adapter would be good enough?
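The dither idea itself is conceptually simple, even if wiring it into the audio chain isn’t. A minimal Python sketch, treating the PWM output as a coarse quantizer (the 11-bit depth here is an illustrative assumption, not a measured figure for the pi):

```python
import random

def quantize(x, bits=11):
    # Model the PWM output as rounding to a coarse grid over [-1, 1]
    scale = 2 ** (bits - 1)
    return round(x * scale) / scale

def tpdf_dither(x, bits=11):
    # Triangular (TPDF) dither: sum of two uniform samples, +/- 1 LSB peak.
    # Added before quantization, it decorrelates the error from the signal.
    lsb = 1.0 / 2 ** (bits - 1)
    noise = (random.random() - random.random()) * lsb
    return quantize(x + noise, bits)
```

The point of the TPDF noise is that the quantization error stops tracking the signal: a tiny input that plain quantization would round away entirely survives, on average, in the dithered output.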

Bunnypi can drive my 1080p TV over HDMI, and the kids do love the screensavers, but accelerated graphics support seems nonexistent. A little reading indicates that the graphics acceleration supported on the pi is OpenGL ES, not standard OpenGL. Youtube videos of ES demos on the pi show it is capable of high-framerate graphics, but apparently a translation layer from standard OpenGL to OpenGL ES, which would let the stock X screensavers run accelerated, either hasn’t been done yet or is technically prohibitive. (Impossible? This is open source, right?)

USB is the only I/O available beyond GPIO and SD, so I don’t honestly know how people develop for this unless there is a cross-compile environment, emulator, or binary-compatible bigger brother that I’m not aware of. (Do people really put in giant SD cards to do raspi development? Or use USB hard-drives?)

The web browser is awkwardly slow. I installed a supposedly optimized webkit-based browser, but it wasn’t noticeably faster. And of course there’s no Flash or HTML5 video support, so no youtube.

The core use of bunnypi remains a dedicated screensaver generator.

Where are the grown-up ARM systems? Something with real I/O (PCIe, SATA, 100M+ ethernet) and ECC memory?

continuity of self-bootstrapping

I’ve been collecting build times for over a decade now, in an effort to grok how much faster newer hardware is, how much larger software is getting, and to normalize expectations between my various pieces of hardware. I use the NetBSD world as a microcosm for this, since it is fairly self-contained, and since NetBSD-2, the build process does a full bootstrap including building a (cross-)compiler. A modern Intel Romley or Grantley platform can build the NetBSD-7 amd64 world in less than 20 minutes, and is completely I/O bound. (Of course, I’m not sure when compilation has ever not been I/O bound…)
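The bookkeeping for collecting these times is trivial; a Python sketch of the sort of wrapper I mean (the timed_build helper and its CSV log format are made up for illustration, not what I actually run):

```python
import csv
import subprocess
import time
from datetime import date

def timed_build(cmd, logfile="buildtimes.csv", label=""):
    """Run a build command, time it, and append a row to a CSV log."""
    t0 = time.monotonic()
    rc = subprocess.call(cmd)
    elapsed = time.monotonic() - t0
    with open(logfile, "a", newline="") as f:
        csv.writer(f).writerow([date.today().isoformat(), label,
                                f"{elapsed:.1f}s", rc])
    return elapsed, rc

# e.g. timed_build(["./build.sh", "-U", "-m", "amd64", "release"],
#                  label="netbsd-7 amd64")
```

Wall-clock time is the honest metric here, since these builds are I/O bound and CPU time alone would flatter the slow disks.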

Self-hosted builds are in some sense “alive” — they beget the next version, they reproduce, and they propagate changes and grow over time. I don’t believe anybody bootstraps from complete scratch anymore, with hand-written, hand-assembled machine code toggled directly into CPU memory, building up an environment that supports a macro assembler, which generates an environment that can host a rudimentary C compiler, and so on. While there is a base case, it is an inductive process: developers use OS to create OS+1, or cross-compile from OS/foocpu to OS/barcpu. How far back could I go and walk this path? Could I do it across architectures? (Historically, how did things jump from the PDP-11 to the VAX to the i386?)

As I’ve been saying goodbye to my oldest hardware, I’ve been trying to get a sense of continuity from those early machines to my latest ones, and wanted to see if I could bootstrap the world on one of my oldest and slowest systems and compare it with doing the same thing on one of my more modern systems. Modern is relative, of course. I’ve been pitting a circa-1990 12.5MHz MIPS R2000 DECStation (pmin) with 24MiB of RAM against a VM instance running on a circa-2010 3GHz AMD Phenom II X2 545, both building the NetBSD 1.4.3A world. The AMD (PVHVM) does a full build in 11 minutes. The same process on the pmin takes almost four days. This isn’t a direct apples-to-apples comparison, since the pmin is building NetBSD/pmax and the AMD is building NetBSD/i386, but it gives a good order-of-magnitude scale. (I should throw a 25MHz 80486 into the mix as a point for architectural normalization…)
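To put a number on that scale, taking “almost four days” as a flat four days:

```python
# Rough order-of-magnitude ratio between the two build times
pmin_minutes = 4 * 24 * 60   # "almost four days", in minutes: 5760
amd_minutes = 11             # the AMD PVHVM full build
ratio = pmin_minutes / amd_minutes
print(round(ratio))          # roughly 520x
```

So two decades of hardware buys somewhere north of five hundredfold on this workload, caveats about differing target architectures included.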

Now for the continuity. I started running NetBSD on the pmax with 1.2, but I only ran it on DECStations until 1.4, and new architectures were always installed with binary distributions. Could I do it through source? As far as I can tell, the distributions were all compiled natively for 1.4.3. (The cross-compile setup wasn’t standardized until NetBSD-2.) Even following a native (rather than cross-compiled) source update path, there were some serious hiccups along the way: 1.4.3 (not 1.4.3A) doesn’t even compile natively for pmax, for instance. On i386, the jump from 1.4.3 to 1.5 is fiddly due to the switch from a.out to ELF formats. I spent a few evenings over winter break successfully fiddling this out on my i386 VM, recalling that a couple decades ago I was unsuccessful in making a similar jump from a.out to ELF with my Slackware Linux install. (I eventually capitulated back then and installed Red Hat from binary.)

So far, I’ve gotten a 1.4.3 pmax to bootstrap 1.4.3A, and gone through the gyrations to get a 1.4.3 a.out i386 to bootstrap 1.5.3 ELF. The next step is doing 1.4.3A -> 1.5.3 on the pmax. I should then be able to do a direct comparison with a 1.5.3 -> 1.6 matrix of native vs. cross-compiled builds on both systems, and that will give me crossover continuity, since I could potentially run an i386 world that has been bootstrapped from source on the pmin.
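The plan amounts to finding a path through a little graph of (release, architecture) pairs. A toy Python sketch, with the edge set limited to the builds described here (it is my summary of the steps above, not an exhaustive compatibility table):

```python
from collections import deque

# Each edge means "this (release, arch) can build that (release, arch)"
EDGES = {
    ("1.4.3", "pmax"): [("1.4.3A", "pmax")],
    ("1.4.3", "i386"): [("1.5.3", "i386")],
    ("1.4.3A", "pmax"): [("1.5.3", "pmax")],
    # the 1.5.3 -> 1.6 native / cross-compile matrix
    ("1.5.3", "pmax"): [("1.6", "pmax"), ("1.6", "i386")],
    ("1.5.3", "i386"): [("1.6", "i386"), ("1.6", "pmax")],
}

def bootstrap_path(start, goal):
    # Breadth-first search for a chain of builds from start to goal
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in EDGES.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None
```

Asking it for `bootstrap_path(("1.4.3", "pmax"), ("1.6", "i386"))` walks exactly the crossover chain I’m after: 1.4.3 pmax through 1.4.3A and 1.5.3 to a cross-compiled 1.6 i386.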

I’m also interested in the compile-time scaling from 1.4.3 -> 1.4.3A -> 1.5 -> 1.5.3 -> 1.6 across multiple architectures. Is it the same for both pmin and i386? When does 24MiB start hurting? (The pmin didn’t seem overly swappy when building 1.4.3A.) Can I bring other systems (m68k, vax, alpha, sparc) to the party, too?

Some people walk a labyrinth for solace… I compile the world.