Camp Kitchen Bug

As my children have been getting older, we have started camping as a family, and this year we had an epic summer vacation with stops at multiple national parks. I’m the family chef, so of course cooking comes to mind right after shelter and clothing. When we camp, I like to provide my family with food cooked from scratch, similar to what I can cook at home, and naturally I like to geek out a little over the kit I use to prepare it.

We have what I consider a fairly complete camp cooking setup, with a commonly available Coleman pack-away kitchen and a couple of folding tables. These are typically set up underneath an EZ-up providing shade from sun or cover from rain, depending on the weather. There is ample counter space for prep work, while folding sinks and a sprayer make for a workable dish-washing process. (I have to give credit to my sister-in-law, who has shared her camping setup with our family over multiple years of camping in southern Oregon, as well as to my wife, who did the grunt work of researching equipment lists and procuring the gear.)

We broke in our own camping setup in summer 2016. Our first trip was with some family friends to River Bend Park near Sweet Home, the second was a one-nighter to McNeil Campground near Zigzag with just our family, and the third was to Oxbow Regional Park with our daughter’s schoolmates. Other trips with the same setup followed, and it has worked out pretty well so far.

At River Bend, there were a few times when I had my dual-burner propane stove going (usually with a griddle on top) while our friends had a single-burner isobutane backpacking stove cranking out hot water. We also had a fire pit, so my Dutch oven was used to make beer bread and pizza. The experience started me thinking about off-grid stoves and fuels, and stirred feelings of nostalgia for campouts from my youth. There’s a whole spectrum of cooking technologies, from open-fire cooking to high-tech Jetboils, and even outside of the casual camping season here in the Pacific Northwest, the wide world of stoves periodically creeps back into my brain.

In contrast to the high-volume and near-meticulously duff-free River Bend, McNeil had dry twigs and cones from pines and Doug firs throughout the campsite. My wife continually reminded me that twiggy duff is a common scenario throughout the tree-infested Pacific Northwest, so naturally my thoughts turned to wood-burning stoves. The Solo Stove was a first hit on web search, but I’m not backpacking with my family (at least not yet), and it seems geared more towards boiling water than the extended, controlled-temperature situations needed for cooking, like simmering. The BioLite stoves also seemed interesting, until I read reviews indicating that the USB charging is not terribly effective. Adding electronics to a device that could potentially be needed in an emergency off-grid situation seems like an overcomplication.

Our 2016 camping season concluded with a very wet weekend at Oxbow Regional Park. In my haste, I neglected the Scout motto and did not have my rain jacket or pants packed. We brought two EZ-ups with us, but only unpacked one to set up over our kitchen area. The wet weather came a day earlier than expected, so I didn’t have our second EZ-up set up over our tent to keep things dry. This meant packing up a day earlier than planned. Since I didn’t get to make my Dutch oven beer bread at the campout, when we returned home I fired up some charcoal in my driveway so I could cook my bread at home. (I could have just cooked it indoors, but that’s not as fun.)

As the weather started turning, I felt obligated to squeeze out the fading echoes of summer by cooking a couple of dinners for my family outside. In contrast to car camping, at home I have a 20# propane tank connected to a grill. After a trip to gather provisions, I had my suitcase propane stove cranking away on my back patio. In my wild remodel fantasies, I would have a covered outdoor kitchen with fireproof counters which could be configured for wood-stove cooking, Dutch oven (charcoal) cooking, or propane. Maybe a small fireplace or hearth for a cozy fire. And since this is all make-believe, there would also be an easy path between my indoor kitchen and outdoor kitchen so I wouldn’t have to duplicate food prep areas. Or maybe a small food-prep area with a sink and a multi-powered (AC, DC, propane) fridge?

2016 summary: I enjoy cooking. I enjoy cooking outdoors. I’ve got a camp kitchen bug. I can’t shake it. Some of it may have to do with being chased out of my own kitchen by children setting up train tracks all over the floor and counters, and horrifying political nonsense blaring in from the other room; cooking gives me some sanctuary. (Dancing toddlers optional.)

This post was started in 2016, and got a little out of hand over the last couple years…

Spirit of Alpha, and the mottainai of old hardware

Mottainai is a Japanese term which roughly translates to a sense of regret or shame in throwing old objects away. Objects that grow old enough (traditionally 100 years) may gain spirits (tsukumogami), making them even more wasteful to discard. Much of my old hardware gives me these feelings.

Thus we return to my AlphaServer 1000A 5/400, which, years ago when I started this post, dropped not only a disk from its RAID5 but also the spare it was going to rebuild on. Luckily I had another spare drive, still had written notes on how to perform the (offline) component restore, and was able to move home directories to more recent hardware to lessen the stress on the array. It has held on without losing another disk for over two years. (The uptime typically reflected long-term power outages more than hardware issues.)

I dragged out the process of moving everything off the Alpha and retiring it permanently, since it still seems somehow disrespectful. It’s a huge (8U) box, eats up loads of power (250W idle), and is easily bested compute-wise by my decades-old x86 hardware. Apparently the 1000A was also not well known for reliability, but somehow mine has worn the ages well. It was the main academic server for many years at my alma mater, and ran my primary domain services for roughly two decades after that.

I could probably have taken the disk array offline to save half the power, and for a period I was seriously examining what it would take to install a PCI SATA controller with an external disk tray. This, of course, never happened.

Tonight I migrated the last service off of it (NIS), and it has been powered down. It’s a little quieter, and I am a little sad. Alpha never got what it deserved, buried under a double acquisition and a failed bet (IA64).

If nothing is completely broken by its offline-ness, it’s only a matter of time until it is evicted from my basement. This is perhaps the event that hurts the most; even though I cannot justify the space taken up by inanimate obsolete computers, I feel like they deserve space in a shrine of some sort, not disassembly.

what happened to the minicomputer?

In a presentation by Gordon Bell (formatting his):

Minicomputers (for minimal computers) are a state of mind; the current logic technology, …, are combined into a package which has the smallest cost. Almost the sole design goal is to make the cost low; …. Alternatively stated: the hardware-software tradeoffs for minicomputer design have, in the past, favored software.
HARDWARE CHARACTERISTICS
Minicomputer may be classified at least two ways:

  • It is the minimum computer (or very near it) that can be built with the state of the art technology
  • It is that computer that can be purchased for a given, relatively minimal, fixed cost (e.g., $10K in 1970.)

Does that still hold? $10K in 1970 dollars is over $61K in 2016 dollars, which would buy a comfortably equipped four-socket Brickland (E7 Broadwell) server, or two two-socket Grantleys (E5 Broadwell). We’re at least in the right order of magnitude.
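For the curious, the inflation math is a one-liner. A minimal sketch, assuming annual-average CPI-U figures of roughly 38.8 for 1970 and 240 for 2016 (my assumption; grab the real tables if you care about the decimals):

    # Rough CPI adjustment of Bell's $10K minicomputer price point.
    CPI_1970 = 38.8   # assumed annual-average CPI-U for 1970
    CPI_2016 = 240.0  # assumed annual-average CPI-U for 2016

    price_1970 = 10_000
    price_2016 = price_1970 * CPI_2016 / CPI_1970
    print(f"${price_1970:,} in 1970 ~= ${price_2016:,.0f} in 2016")
    # prints: $10,000 in 1970 ~= $61,856 in 2016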

Perhaps a better question is whether modern Intel Xeon platforms (like Grantley or the upcoming Purley) are minimal computers. Bell had the midi- and maxicomputer as identified categories past the minicomputer, with the supercomputer at the top.

We are definitely in the favoring-software world — modern x86 is microcoded these days, and microcontrollers are everywhere in modern server designs: power supplies, voltage regulators, fan controllers, the BMC. The Xeon itself has the power control unit (PCU), and the chipset has the management engine (ME). Most of these are closed, and not directly programmable by a platform owner. Part of this is security-related: you don’t want an application being able to rewrite your voltage regulator settings or hang the thermal management functions of your CPU. Part of it is keeping proprietary trade secrets, though. The bringup flow between the Xeon and chipset (ME) is heavily proprietary, and Intel’s deliberate decision not to support third-party chipsets keeps it in trade-secret land.

However, I argue that modern servers have grown to the midi- if not maxicomputer level of complexity. Even in the embedded world, the level of integration on modern ARM parts seems to put most of them in the midicomputer category. Even AVRs seem to be climbing out of the microcomputer level.

On the server side, what if we could stop partitioning into multiple microcontrollers and coalesce their functionality? How minimal could we make a server system and still retain ring 3 ia32e (x64) compatibility? Would we still need the console-in-system BMC? Could a real-time OS on the main CPU handle its own power and thermal telemetry? What is minimally needed for bootstrapping in a secure fashion?

I’ll stop wondering about these things when I have answers, and I don’t see any. So I continue to jump down the platform architecture rabbit hole…

Now only one generation behind on wireless

I’m a big fan of wires for networking. You usually know where they go if you are the one who installed them, they are reliable for long periods of time without maintenance, and they are not typically subject to interference without physical access. They are cumbersome for battery-powered devices, though, so although I have pulled cat5e through multiple rooms in my house, I did eventually relent and installed an 802.11b bridge to my home network in 2005. My first B bridge was based on an Atmel chipset, and I don’t remember much beyond that except that performance was really poor.

My first B wireless bridge was replaced with a commercial-grade B/G model after it became apparent that even light web browsing was unusable with three wireless devices. The network was originally left open (but firewalled), which lasted until a neighbor’s visiting laptop during the holidays generated approximately 20,000 spam messages through my mail infrastructure. (My dual-50MHz SPARCstation 20 dutifully delivered about two-thirds of them before I noticed a few hours later. Luckily I only ended up on a single blacklist as far as I could tell, and the listing expired a few days later.) I set a password, and went on my merry way.

The B/G configuration survived until I realized that I only had a single B holdout, in the form of an old laptop which used a PCMCIA Lucent ORiNOCO Silver wireless card — everything else was capable of G. The laptop was due for a hand-me-down update anyway, so it was retired and my network was configured exclusively for 802.11g. Observed network speeds jumped, and through the years more devices joined the network.

With 2017 rolling around, I figured it was time to upgrade the wireless. Something capable of B/G/N would be easily available, and I knew that N was capable of working in 5GHz, so my plan was to keep the existing G network in 2.4GHz and augment it with N in 5GHz. Yes, this meant having two wireless bridges, but I’d be able to cover all the standards.

My wife has had an Amazon Kindle since the first generation (still has it, still uses it on occasion), but her seventh-generation Kindle never worked correctly with my G network (I even tried B/G and B-only), and it’s been kind of a sore point since she got it. It only supports N on 2.4GHz, so that nixed my idea of splitting G and N across frequency ranges, but we’re far enough away from our neighbors that channel capacity doesn’t seem too bad.

After getting the new router set up and N working at its new location, I started re-associating devices from G to N. When I was done, there weren’t any G devices left. Everything in active use already supported N.

Hah.

Now to fully decommission the old wireless router, but that’s another post…

bridges to past peripherals

I’ve been slowly bringing up a dual Xeon E5472 system from circa 2008 as a storage server. It has a single PCI-X slot, with the rest PCIe x4. The PCI-X slot is occupied by a 3ware Escalade variant, so I have no other PCI slots available. I originally intended to run Joyent SmartOS on it for use as a dedicated storage server, possibly migrating some VMs to containers. The SmartOS kernel (née OpenSolaris) unfortunately doesn’t support the 3ware card, even in JBOD mode, and I already deal with ZFS on Linux at work, so I figured I’d try FreeBSD. I was able to get it installed on a ZFS mirror of mismatched drives after running through a manual gauntlet, but spare SATA drives are in short supply in my basement datacenter, so I figured I’d see what else I could connect to it. (I’m holding out hope for a PCIe SCSI controller to keep some SCA drives in service.)

For kicks, I purchased a PCIe-to-PCI bridge so I could install a PATA controller and try running ZFS mirrors on some new-old-stock PATA drives. I expected the PATA controller to be minimally functional, but I’m pleasantly surprised at how well it works. Benchmark performance is comparable to a mirror assembled from my mismatched SATA drives, to within a couple percent (2% worse, in some cases 10% better). I suppose this isn’t surprising, since the drives I’m testing are contemporaries, just with different interfaces. (I also expect that as I add more spindles to the PCI-X SATA controller it will continue to scale bandwidth, which ye olde IDE controller can’t physically do.)

My computing conscience pointed out that a far better use of this newly acquired SATA connectivity would be to buy some large SATA drives, copy images and/or data from the smaller obsolete drives I have been collecting for data-retention purposes, and then get rid of them. Going through a few drives so far, the storage space is trivial, since all the drives of interest are < 100GB. After optionally transferring the contents of and clearing a few drives, I ended up with a pile to take to the local recycler. While I was there, I picked up three 500GB WD Blues to assemble into a ~1TB RAIDZ. I'm getting roughly 100MByte/s read and write benchmark speeds, which seems plenty fast for my purposes. The only benchmarks I have which beat it are SSDs or a (very) large (now waterfalled) Fibre Channel disk array. It seems like adding a few more disks for a 6-disk RAIDZ2 could make sense, but I also have a couple of 3TB drives I plan on shuffling into the array as part of my grand migration scheme.
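The transfer-then-recycle flow is nothing fancy; dd does the job, but here is a minimal Python sketch of the same idea, with a checksum so the image can be verified later. The device and image paths are hypothetical examples, not my actual layout:

    import hashlib

    SOURCE = "/dev/ada4"                   # hypothetical: the old drive
    IMAGE = "/tank/images/old-drive.img"   # hypothetical destination
    CHUNK = 1024 * 1024                    # read in 1MiB chunks

    sha = hashlib.sha256()
    with open(SOURCE, "rb") as src, open(IMAGE, "wb") as dst:
        while True:
            block = src.read(CHUNK)
            if not block:
                break
            sha.update(block)   # checksum the image as it streams by
            dst.write(block)
    print(IMAGE, sha.hexdigest())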

raspi is not the ARM I’m looking for

I got a Raspberry Pi a couple years ago as a Christmas present from my brother-in-law. I like the idea: a cheap computer with roughly the same horsepower as my workstations of yesteryear, in a power-sipping small form factor. I want to believe, but the experience is disappointing.

The first disappointment was that the accouterments for the Pi (power supply, SD card, functional case, WiFi adapter) cost as much as the base unit. The 1541 disk drive also cost more than a C64, so this isn’t a horrible shock, but the $40 Pi price is not all-inclusive.

I christened my Pi “bunnypi,” and originally had it mounted on the back of a dedicated monitor with a LaRu bunny sticker on it. My original plan was to introduce my son to old-school DOS games via DOSBox on it. This was marginally successful, since DOSBox runs only at roughly the 286-12 level, with noticeable performance glitches. This hasn’t stopped my son from playing with “the guys” in Ultima VI, or matching words in Reader Rabbit, but an actual 386 is more performant. ScummVM seems to run reasonably well with older titles, but is not frequently updated from upstream, so some of my favorite games remain unsupported on the Pi.

Bunnypi is not terribly reliable. It seems prone to overheating, and a few times a year the Raspbian updater seems to corrupt its own bootloader, leaving me to manually perform firmware and loader fixups. Part of the unreliability is just cheap hardware — the “official” power supply I received with mine seems to have lost capacity over time, and plugging in the WiFi adapter caused the system to reset. It would hang at boot if the WiFi was left plugged in. I replaced the power supply with a beefier unit from my now-broken tablet (which can source the extra current needed for charging), and it seems to be working better, although I’m still seeing periodic USB disconnect/reconnect cycles. As for the Linux updates/upgrades, screwing up the boot process somehow seems par for the Linux distro course. (Take a distro of your choice, find the earliest version that will install on your target hardware or VM, walk it through updates/upgrades to the latest version, and see if it makes it…)

The on-board audio of bunnypi is noisy PWM, limiting its utility for music playing. The noise is signal-correlated, so adding some noise-shaped dither might help, but setting up the audio output chain is quite fiddly. Add-on boards with discrete DACs are available, or maybe a cheap $5 USB adapter would be good enough?
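To make the dither idea concrete, here is a minimal sketch of first-order noise-shaped TPDF dither ahead of a coarse quantizer. This is the textbook technique in the abstract, not anything tied to the Pi’s actual PWM driver:

    import random

    def dither_and_quantize(samples, step):
        """Quantize samples to the given step size with TPDF dither and
        first-order noise shaping: each sample's quantization error is
        fed back into the next, pushing residual noise up in frequency
        where it is less audible."""
        out, err = [], 0.0
        for s in samples:
            tpdf = (random.random() - random.random()) * step  # triangular PDF
            target = s - err                   # first-order error feedback
            q = round((target + tpdf) / step) * step
            err = q - target                   # error carried to next sample
            out.append(q)
        return out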

Bunnypi can drive my 1080p TV over HDMI, and the kids do love the screensavers, but the accelerated graphics support seems nonexistent. A little reading indicates the graphics acceleration supported on the Pi is OpenGL ES, not standard OpenGL. YouTube videos of ES demos on the Pi show it is capable of high-framerate graphics, but apparently translating standard OpenGL to OpenGL ES to run the stock X screensavers either hasn’t been done yet, or is technically prohibitive. (Impossible? This is open source, right?)

USB is the only I/O available beyond GPIO and SD, so I honestly don’t know how people develop for this unless there is a cross-compile environment, an emulator, or a binary-compatible bigger brother that I’m not aware of. (Do people really put in giant SD cards to do raspi development? Or use USB hard drives?)

The web browser is awkwardly slow. I installed a supposedly optimized WebKit-based browser, but it wasn’t noticeably faster. And of course there is no Flash or HTML5 video support, so no YouTube.

The core use of bunnypi remains a dedicated screensaver generator.

Where are the grown-up ARM systems? Something with real I/O (PCIe, SATA, 100M+ Ethernet) and ECC memory?

continuity of self-bootstrapping

I’ve been collecting build times for over a decade now, in an effort to grok how much faster newer hardware is, how much larger software is getting, and to normalize expectations between my various pieces of hardware. I use the NetBSD world as a microcosm for this, since it is fairly self-contained, and since NetBSD-2, the build process does a full bootstrap including building a (cross-)compiler. A modern Intel Romley or Grantley platform can build the NetBSD-7 amd64 world in less than 20 minutes, and is completely I/O bound. (Of course, I’m not sure when compilation has ever not been I/O bound…)
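The collection itself is mundane. NetBSD’s build.sh does the full bootstrap; my “harness” amounts to timing it and appending a row to a log. A sketch under assumed names (the CSV path and build flags here are examples, not my actual setup):

    import csv, platform, subprocess, time

    # Time a full NetBSD world build and append the result to a log.
    cmd = ["./build.sh", "-U", "-m", "amd64", "release"]

    start = time.time()
    subprocess.run(cmd, check=True)        # run from inside the src tree
    elapsed = time.time() - start

    with open("buildtimes.csv", "a", newline="") as f:
        csv.writer(f).writerow(
            [platform.node(), " ".join(cmd), round(elapsed)])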

Self-hosted builds are in some sense “alive” — they beget the next version, they reproduce, and they propagate changes and grow over time. I don’t believe anybody bootstraps from complete scratch these days: hand-written, hand-assembled machine code toggled directly into CPU memory to create an environment that supports a macro assembler, which generates an environment that can host a rudimentary C compiler, and so on. While there is a base case, it is an inductive process: developers use OS to create OS+1, or cross-compile from OS/foocpu to OS/barcpu. How far back could I go and walk this path? Could I do it across architectures? (Historically, how did things jump from PDP-11 to VAX to i386?)

As I’ve been saying goodbye to my oldest hardware, I’ve been trying to get a sense of continuity from those early machines to my latest ones, and wanted to see if I could bootstrap the world on one of my oldest and slowest systems and compare it with doing the same thing on one of my more modern systems. Modern is relative, of course. I’ve been pitting a circa-1990 12.5MHz MIPS R2000 DECstation (pmin) with 24MiB of RAM against a VM instance running on a circa-2010 3GHz AMD Phenom II X2 545, both building the NetBSD 1.4.3A world. The AMD (as a PVHVM instance) does a full build in 11 minutes. The same process on the pmin takes almost four days. This isn’t a direct apples-to-apples comparison, since the pmin is building NetBSD/pmax and the AMD is building NetBSD/i386, but it gives a good order-of-magnitude scale. (I should throw a 25MHz 80486 into the mix as a point for architectural normalization…)
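Back-of-the-envelope on that gap:

    pmin_minutes = 4 * 24 * 60         # "almost four days"
    amd_minutes = 11
    print(pmin_minutes / amd_minutes)  # ~524, so call it 500x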

Now for the continuity. I started running NetBSD on the pmax with 1.2, but I only ran it on DECstations until 1.4, and new architectures were always installed from binary distributions. Could I do it through source? As far as I can tell, the distributions were all compiled natively for 1.4.3. (The cross-compile setup wasn’t standardized until NetBSD-2.) Even following a native (rather than cross-compiled) source update path, there were some serious hiccups along the way: 1.4.3 (not 1.4.3A) doesn’t even compile natively for pmax, for instance. On i386, the jump from 1.4.3 to 1.5 is fiddly due to the switch from a.out to ELF executable formats. I spent a few evenings over winter break successfully fiddling this out on my i386 VM, recalling that a couple decades ago I was unsuccessful in making a similar jump from a.out to ELF with my Slackware Linux install. (I eventually capitulated back then and installed Red Hat from binary.)

So far, I’ve gotten a 1.4.3 pmax to bootstrap 1.4.3A, and gone through the gyrations to get a 1.4.3 a.out i386 to bootstrap 1.5.3 ELF. The next step is doing 1.4.3A -> 1.5.3 on the pmax. I should then be able to do a direct comparison with a 1.5.3 -> 1.6 matrix of native vs. cross-compiled builds on both systems, and that will give me crossover continuity, since I could potentially run an i386 world that has been bootstrapped from source on the pmin.

I’m also interested in the compile-time scaling from 1.4.3 -> 1.4.3A -> 1.5 -> 1.5.3 -> 1.6 across multiple architectures. Is it the same for both pmin and i386? When does 24MiB start hurting? (The pmin didn’t seem overly swappy when building 1.4.3A.) Can I bring other systems (m68k, vax, alpha, sparc) to the party, too?

Some people walk a labyrinth for solace… I compile the world.

modems added to the ice floe

I added a couple of Telebit TrailBlazers to the ice floe a couple days ago, and tonight my US Robotics Courier HST.

My father purchased a Kyocera 1200bps modem for our family’s Leading Edge Model D, with the hope that my mom could use it for her transcription and word-processing business. I used it to call BBSes. It took at least a year before I figured out how to get file transfers working with the included Microsoft Access comm program. (Not Microsoft Access the database — Access the comm program!) I downloaded ProComm using the Xmodem-checksum protocol, then later Telix (with Zmodem).
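(The “checksum” in Xmodem-checksum is about as simple as error detection gets. A sketch of the per-packet check, from memory of the protocol:)

    def xmodem_checksum(block: bytes) -> int:
        """XMODEM checksum: the arithmetic sum of the 128-byte data
        block, modulo 256. The receiver recomputes it and ACKs or NAKs
        the packet; later variants upgraded this to a 16-bit CRC."""
        assert len(block) == 128
        return sum(block) % 256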

I saved my paper route money to buy a 2400bps modem. I did ANSI. I ran a BBS. I saved more paper route money and got a 14.4k Courier HST through a local sysop of a large multi-line BBS. In the early 90s it was cheaper for me to call across the country in the middle of the night with a budget long-distance provider than to call the more remote areas of my own area code, but that’s the subject of another post…

When I arrived in college, the sysadmin there knew I had run a BBS, beckoned me to the sub-basement, and handed me a Xylogics terminal server. “You can make this work, right?” I first configured it to replace the old Cisco STS-10, providing direct text logins for students and alums. I opened up PPP connections a couple months later, and wrote an awk script to parse the log files and identify freeloaders. As a staff member, I of course never showed up on the freeloader list, even though I left my connection up 24/7, phone line permitting.
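That script is long gone, but the idea was simple enough to re-sketch in Python. The log format and the threshold here are invented for illustration; the real terminal server logs were nowhere near this tidy:

    import re
    import sys
    from collections import defaultdict

    # Tally per-user connect time from (hypothetical) log lines like:
    #   "... user jdoe connected for 5400 seconds ..."
    LINE = re.compile(r"user (\w+) connected for (\d+) seconds")
    totals = defaultdict(int)

    for line in sys.stdin:
        m = LINE.search(line)
        if m:
            totals[m.group(1)] += int(m.group(2))

    # Anyone hogging more than 8 hours a day was a freeloader candidate.
    for user, secs in sorted(totals.items(), key=lambda kv: -kv[1]):
        if secs > 8 * 3600:
            print(f"{user}\t{secs / 3600:.1f} hours")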

During a year break from college, I was employed at a large regional ISP as a system operator, junior to the sysadmins. I did the grunt-work of hard-resetting (yanking and re-seating) failing modems from the 800+ lines in our local POP, and directing our field guy to busy-out or replace modems that appeared to be broken at the frame-relay-connected remote POPs.

A couple years later I replaced my nailed-up V.34 modem with a DSL connection, first CAP and later DMT. When the telco started interfering with their own DSL connections, and the combination of video streaming and work-related VPN needs started outstripping DSL, I moved to a cable modem.

I originally kept my modems with the intent of setting up a backup UUCP connection for my email, as I had provided for others in college. Since moving jobs to corpoland, I no longer have control over a remote PSTN line, so I can’t set up my own out-of-band UUCP connection. I no longer have a POTS line at home. I suspect that modems over VoIP do not fare well, although V.MOIP is supposed to address this. In any case, the sunset on modems designed to work over the PSTN has long since passed, and so it’s time to say goodbye.

I don’t even have another HST modem to dial in order to capture handshake audio, and a cursory search on the internet doesn’t reveal any such recordings. My HST has spent over a decade in a box, and the last time I fired it up, the NVRAM was completely shot, and it’s not like I have anywhere to dial anymore.

The option of simulating old tyme internet over a serial connection is always available by using a null modem cable. Latency will obviously be better, but it’s just a simulation. Screeching handshake not included.
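A minimal sketch of the null-modem game using pyserial, assuming a USB serial adapter at /dev/ttyUSB0 on each end (the device name and throwaway AT command are examples):

    import serial  # pyserial

    # Open the port at a nostalgic rate: 8 data bits, no parity, 1 stop.
    port = serial.Serial("/dev/ttyUSB0", baudrate=9600,
                         bytesize=8, parity="N", stopbits=1, timeout=2)
    port.write(b"ATZ\r\n")     # habit; there's no modem to hear it
    print(port.readline())     # whatever the far end sends back
    port.close()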

the kids have met spinning media

My children have met spinning media. I play games with them on my c64, so they know what floppy drives are. I play vinyl records for them. They have a small DVD collection of movies. Tonight we took apart a couple hard drives so I could show them the insides. They enjoy using screwdrivers.

First up was a full-height 1.6GB Seagate PA4E1B 5.25″ drive. We weren’t able to get the lid off, but they could see the drive arms and all the platters. Ten of them. Eighteen heads on the arm. (Later, with a hammer and screwdriver, I was able to get the lid off.)

We then moved to a 3.5″ 52MB Quantum ProDrive 52S. When the top of the drive came off, my daughter recognized the configuration of the head and arm over the platter. “It looks like a record,” she said. Two heads, and an optical detector for track positioning rather than servo tracks. I now wish I had fired it up and listened to it before disassembly, as I suspect it may have had a unique sound.

The largest drives I have now in my home datacenter are 3TB. MicroSD cards sold in the checkout lanes at my local supermarket can hold more data than the drives we disassembled, in a fraction of the physical space, with orders of magnitude less power consumption. SSDs are catching up to spinning rust in capacity, and Intel’s recently announced non-volatile memory pushes densities even higher. It’s possible my kids will never have to delete data in their adult lives: data would get marked as trash, but would still technically be available for retrieval “just in case,” because the cost savings of actually reclaiming the space would be negligible.

I had a Xerox 820-II CP/M machine with 8″ floppies that stored close to 1MB of data. My family had a PC with a 30MB hard drive, and I remember being in awe in the early 90s thinking about 1GB hard drives that cost around $1k. I bought a 179MB drive in high school with stipend money, and scrounged drives of various sizes throughout college. I don’t remember the first drive > 1GB that I owned — very few have survived. I vaguely recall a jump from hundreds of MB to tens of GB that happened in the early 2000s. All spinning media.

All slowly succumbing to mechanical wear-out, or more simply, obsolescence.