Spirit of Alpha, and the mottainai of old hardware

Mottainai is a Japanese term that roughly translates to a sense of regret or shame at throwing things away. In folklore, objects that reach a great age (traditionally a hundred years) may gain spirits (tsukumogami), making them even more wasteful to discard. Much of my old hardware gives me these feelings.

Thus we return to my AlphaServer 1000A 5/400, which, back when I started this post years ago, dropped not only a disk from its RAID5 but also the spare it was going to rebuild onto. Luckily I had another spare drive, still had my written notes on how to perform the (offline) component restore, and was able to move home directories to more recent hardware to lessen the stress on the array. It held without losing another disk for over two years. (Its uptime typically reflected long-term power outages more than hardware issues.)

I dragged out the process of moving everything off the Alpha and retiring it permanently, since it still seems somehow disrespectful. It’s a huge (8U) box, eats loads of power (250W idle), and is easily bested compute-wise by my decades-old x86 hardware. Apparently the 1000A was also not well known for reliability, but somehow mine has worn the ages well. It was the main academic server at my alma mater for many years, and ran my primary domain services for roughly two decades after that.

I could probably have taken the disk array offline to save half the power, and for a period I was seriously examining what it would take to install a PCI SATA controller with an external disk tray. This, of course, never happened.

Tonight I migrated the last service off of it (NIS) and it has been powered down. It’s a little quieter here, and I am a little sad. Alpha never got what it deserved, buried under a double acquisition (DEC into Compaq, Compaq into HP) and a failed bet on IA64.

If nothing is completely broken by its offline-ness, it’s only a matter of time until it is evicted from my basement. This is perhaps the event that hurts the most; even though I cannot justify the space taken up by inanimate obsolete computers, I feel like they deserve space in a shrine of some sort, not disassembly.

bridges to past peripherals

I’ve been slowly bringing up a dual Xeon E5472 system from circa 2008 as a storage server. It has a single PCI-X slot, with the rest PCIe x4; the PCI-X slot is occupied by a 3ware Escalade variant, so I have no other PCI-type slots available. I originally intended to run Joyent SmartOS on it as a dedicated storage server, possibly migrating some VMs to containers. Unfortunately the SmartOS kernel (née OpenSolaris) doesn’t support the 3ware card, even in JBOD mode, and I already deal with ZFS on Linux at work, so I figured I’d try FreeBSD. I was able to get it installed on a ZFS mirror of mismatched drives after running through a manual gauntlet, but spare SATA drives are in short supply in my basement datacentre, so I figured I’d see what else I could connect to it. (I’m holding out hope for a PCIe SCSI controller to keep some SCA drives in service.)
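For the curious, the mirror-creation step of that gauntlet looks roughly like the following. This is a sketch rather than a transcript: the device names (ada0/ada1), labels, and partition size are hypothetical, and a full root-on-ZFS install also involves boot partitions and bootcode that I’ve elided.

    # Partition both drives identically, sized to the smaller one
    # (ada0/ada1 are hypothetical device names)
    gpart create -s gpt ada0
    gpart add -t freebsd-zfs -a 1m -s 70g -l mirror0 ada0
    gpart create -s gpt ada1
    gpart add -t freebsd-zfs -a 1m -s 70g -l mirror1 ada1

    # Build the mirrored pool from the matching partitions
    zpool create tank mirror gpt/mirror0 gpt/mirror1

Sizing the partitions explicitly is belt-and-suspenders; ZFS will size a mirror to its smallest member anyway, but matching partitions make later drive swaps less surprising.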

For kicks, I purchased a PCIe-to-PCI bridge so I could install a PATA controller and try running ZFS mirrors on some new-old-stock PATA drives. I expected the PATA controller to be minimally functional, but I’m pleasantly surprised at how well it works. Benchmark performance is within a couple of percent of a mirror assembled from my mismatched SATA drives (2% worse in some cases, 10% better in others). I suppose this isn’t surprising, since the drives I’m testing are contemporaries, just with different interfaces. (I also expect that as I add more spindles to the PCI-X SATA controller it will continue to scale bandwidth, which ye olde IDE controller can’t physically do.)
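My benchmarks here are nothing rigorous; think quick sequential-throughput checks along these lines (paths and sizes are made up, and I feed dd random data because ZFS compression would happily flatten a stream of zeros and inflate the write numbers):

    # Sequential write: 4GB of incompressible data onto the pool
    dd if=/dev/random of=/tank/bench.dat bs=1m count=4096

    # Sequential read it back (ideally with the file too big for the ARC,
    # so the disks are actually exercised)
    dd if=/tank/bench.dat of=/dev/null bs=1m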

My computing conscience pointed out that a far better use of this newly acquired SATA connectivity would be to buy some large SATA drives, copy images and/or data off the smaller obsolete drives I have been collecting for data-retention purposes, and then get rid of them. Going through a few drives so far, the storage cost is trivial, since all the drives of interest are under 100GB. After optionally transferring the contents of and then clearing a few drives, I ended up with a pile to take to the local recycler.

While I was there, I picked up three 500GB WD Blues to assemble into a ~1TB RAIDZ. I'm seeing read and write benchmark speeds of roughly 100MByte/s, which seems plenty fast for my purposes. The only benchmarks I have that beat it are SSDs or a (very) large (now waterfalled) Fibre Channel disk array. Adding a few more disks for a six-disk RAIDZ2 could make sense, but I also have a couple of 3TB drives I plan on shuffling into the array as part of my grand migration scheme.
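Both the drive-retirement routine and the new pool boil down to a few commands. Another sketch, with hypothetical device names, pool name, and paths; the conv=noerror,sync flags keep dd grinding past any bad sectors on the old drives:

    # Image an old drive into the pool before it goes to the recycler
    dd if=/dev/ada4 of=/tank/images/old-80g.img bs=1m conv=noerror,sync

    # Then make its contents unrecoverable before handing it over
    dd if=/dev/zero of=/dev/ada4 bs=1m

    # Three 500GB drives -> one ~1TB raidz pool
    zpool create media raidz ada1 ada2 ada3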