Mottainai (https://en.wikipedia.org/wiki/Mottainai) is a Japanese term that roughly translates to a sense of regret or shame at throwing old objects away. Objects that reach a great age (traditionally 100 years) may even gain spirits (https://en.wikipedia.org/wiki/Tsukumogami), making it all the more wasteful to discard them.
Thus we return to my AlphaServer 1000A 5/400, which dropped not only a disk from its RAID5, but also the spare it was rebuilding onto. Luckily I had another spare drive, still had my written notes on how to perform the (offline) component restore, and was able to move home directories to more recent hardware to lessen the stress on the array. It has held up without losing another disk for over a year (in fact, the current uptime reflects that).
I should really move everything off the Alpha and retire it permanently, but it seems somehow disrespectful. It’s a huge (8U) box, eats up loads of power (250W idle), and is easily bested compute-wise by my decades-old x86 hardware. Apparently the 1000A was also not well-known for reliability, but somehow mine has worn the ages well.
I bet I could take the disk array offline and save half the power. hmm…
ignore updates for a couple months and get hit with viagra spam. luckily the database wasn’t corrupted (that I can tell) and pre-hack versions of my posts have been restored.
this is the modern software landscape. quality. was static HTML so bad? it was limiting, at least. are there more secure ways to do this? definitely.
I’ve been slowly bringing up a dual Xeon E5472 system from circa 2008 as a storage server. It has a single PCI-X slot, with the rest PCIe x4; the PCI-X slot is occupied by a 3ware Escalade variant, so I have no other PCI slots available. I originally intended to run Joyent SmartOS on it as a dedicated storage server, possibly migrating some VMs to containers. Unfortunately the SmartOS kernel (née OpenSolaris) doesn’t support the 3ware card, even in JBOD mode, and since I already deal with ZFS on Linux at work, I figured I’d try FreeBSD. After running through a manual gauntlet, I was able to get it installed on a ZFS mirror of mismatched drives, but spare SATA drives are in short supply in my basement datacentre, so I figured I’d see what else I could connect to it. (I’m holding out hope for a PCIe SCSI controller to keep some SCA drives in service.)
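For reference, creating a ZFS mirror from mismatched drives is a one-liner once the OS is up (the manual gauntlet above was the root-on-ZFS install itself, which involves more partitioning). A minimal sketch for a data pool; the device names (ada0, ada1) and pool name are assumptions for this box:

```shell
# Sketch: ZFS mirror across two mismatched drives on FreeBSD.
# Usable capacity is limited to the smaller of the two drives.
zpool create -f tank mirror /dev/ada0 /dev/ada1
zpool status tank   # verify both sides of the mirror show ONLINE
```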
For kicks, I purchased a PCIe to PCI bridge so I could install a PATA controller and try running ZFS mirrors on some new-old-stock PATA drives. I expected the PATA controller to be minimally functional, but I’m pleasantly surprised at how well it works. Benchmark performance is within a couple percent of a mirror assembled from my mismatched SATA drives (2% worse, in some cases 10% better). I suppose this isn’t surprising, since the drives I’m testing are contemporaries, just with different interfaces. (I also expect that as I add more spindles to the PCI-X SATA controller it will continue to scale in bandwidth, which ye olde IDE controller physically can’t.)
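The post doesn’t say which benchmark produced those numbers; a crude sequential test along these lines would suffice for this kind of interface comparison (the pool name and dataset are assumptions):

```shell
# Rough sequential benchmark on a pool mount. Disable compression
# first, or the stream of zeroes will compress away and inflate the
# apparent throughput.
zfs set compression=off tank
dd if=/dev/zero of=/tank/bench bs=1m count=4096   # sequential write, 4GB
dd if=/tank/bench of=/dev/null bs=1m              # sequential read back
rm /tank/bench
```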
My computing conscience pointed out that a far better use of this newly acquired SATA connectivity would be to buy some large SATA drives, copy images and/or data off the smaller obsolete drives I have been collecting for data-retention purposes, and then get rid of them. Having gone through a few drives so far, the storage space required is trivial, since all the drives of interest are under 100GB.
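Imaging a drive before it goes to the recycler is simple enough; a sketch, with the source device and destination path as assumptions:

```shell
# Sketch: image an obsolete drive onto the pool before recycling it.
# conv=sync,noerror pads and skips past unreadable sectors rather
# than aborting, which matters on drives this old.
dd if=/dev/ada3 of=/tank/images/old-disk-01.img bs=1m conv=sync,noerror
# Record a checksum so the image can be verified later.
sha256 /tank/images/old-disk-01.img
```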
After transferring the contents of (where warranted) and clearing a few drives, I ended up with a pile to take to the local recycler. While I was there, I picked up three 500GB WD Blues to assemble into a ~1TB RAIDZ. I'm getting roughly 100MB/s in read and write benchmarks, which seems plenty fast for my purposes. The only benchmarks I have that beat it are SSDs and a (very) large (now waterfalled) Fibre Channel disk array. Adding a few more disks for a 6-disk RAIDZ2 could make sense, but I also have a couple of 3TB drives I plan on shuffling into the array as part of my grand migration scheme.
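A sketch of that shuffle, assuming the device names: three 500GB drives make the RAIDZ, and the 3TB drives can later be rotated in one at a time, since a ZFS vdev grows once every member is larger.

```shell
# Three 500GB drives into a RAIDZ (~1TB usable after parity).
zpool create tank raidz /dev/ada1 /dev/ada2 /dev/ada3

# Later migration: swap in larger drives one at a time, letting each
# resilver finish before the next. With autoexpand on, the pool grows
# in place once all members are the larger size.
zpool set autoexpand=on tank
zpool replace tank ada1 ada4
```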