Feeling a bit nostalgic right now about the DEClaser 1152 postscript laser printer I left at the family business. It was liquidated when the business relocated. An ex-co-worker gave me a terse warning to come collect my things before the liquidation, but I was dealing with a newborn at the time and didn’t give it much thought.
The TZK10 is a DEC quarter-inch tape drive using the QIC-525 standard: 525MB on a DC6525 tape; 320MB on a DC6320 tape. These were my primary backup media for my 1990s-era unix cluster through the first decade of the 2000s.
Skolem was a Pentium III mobile with 128MiB RAM, a 20GiB travelstar HD, and a dock! The dock was important: the laptop’s LCD was completely shot, and the dock’s two PCI slots (filled with Lite-on tulip clones) enabled routing duties. Skolem was my router for many years, taking over after my Ultra 5 started failing, and it even survived a DSL-to-cable ISP change.
I got a Romley (dual e5-2670 Jaketowns) last November with the plan to pull in the VMs from the three Xen hosts I currently run. I’ve named it “Luxor.” It idles at around 150W, which should save me a bit on the power bill, and even though it currently has only 1TB of mirrored storage, thin LVM provisioning should let me stretch that a bit. It’s easily the fastest system in my house now, with the possible exception of my wife’s haswell macbook pro for single-threaded performance.
Luxor has 96GiB [now 128GiB] of memory. I think this may exceed the combined sum of all other systems I have in the house. I figured that the price of the RAM alone justified the purchase. Kismet. Looking at the memory configuration, I have six 8GiB DIMMs per socket, but the uneven DIMMs-per-channel count prevents optimal interleaving across the four channels. Adding two identical DIMMs, or moving two DIMMs from one socket to the other, should alleviate this. I doubt it’s causing performance regressions, but the DIMMs are cheap and available, and I plan on keeping this machine around until it becomes uneconomical to run (or past that point, if history is any indicator), so DIMMs to expand it to 128GiB should be arriving soon.
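The imbalance is easy to see if you assume the usual round-robin slot-population order (a sketch only; the actual slot-to-channel mapping depends on the board):

```python
def dimms_per_channel(n_dimms, n_channels=4):
    """Distribute DIMMs round-robin across memory channels."""
    counts = [0] * n_channels
    for i in range(n_dimms):
        counts[i % n_channels] += 1
    return counts

print(dimms_per_channel(6))  # [2, 2, 1, 1] -- uneven, spoils 4-way interleave
print(dimms_per_channel(8))  # [2, 2, 2, 2] -- balanced across all channels
```

Either fix (two more DIMMs, or shuffling two to the other socket) leaves every channel on a given socket equally populated, which is the condition the memory controller wants for full interleaving.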
In mid-December, the first olde sun x2200m2 opteron (“Anaximander”) had its two VMs migrated and was shut down. A second x2200m2 (“Anaximenes,” which hosts the bulk of my infrastructure, including this site) remains. While writing this post, a phenom II x2 545 (“Pythagoras”) had its 2TB NFS/CIFS storage migrated to my FreeBSD storage server (“Memphis”) along with some pkgsrc build VMs and secondary internal services.
Bootloader barf-bag for x86 is still in full effect. I couldn’t figure out how to PXE without booting the system in legacy BIOS mode, and I gave up trying to get the Ubuntu installer to do a GPT layout, let alone boot it. I figure I can migrate the LVM volumes to new GPT-partitioned disk(s), install EFI grub, switch the system to EFI mode, and Bob’s your uncle. (He’s my brother-in-law, but close enough.) At least that’s the plan.
The VMs on Anaximenes have been a little trickier to move, since I need to make sure I’m not creating any circular dependencies between the infrastructure VMs and Luxor’s own ability to boot. Can the VMs start without DHCP and DNS being up, for instance?
Systemd is a huge PITA, and isn’t able to shut down VMs cleanly, even after fiddling with the unit files to add some dependency ordering. My current theory is that systemd kills the underlying qemu instances before the guests can finish shutting down, so the VMs essentially get stuck. Running the shutdown script manually works fine and the VMs come down cleanly.
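One workaround I may try: wrap the shutdown script in a oneshot unit so systemd runs it (and waits for it) before tearing down the hypervisor services. This is only a sketch; the script path and the libvirtd unit name are assumptions about my setup, not a tested fix:

```ini
# /etc/systemd/system/vm-shutdown.service (hypothetical)
[Unit]
Description=Shut down VMs cleanly before the host goes down
# Stop order is the reverse of start order, so ordering this unit
# After= libvirtd means its ExecStop runs while libvirtd (and the
# qemu children) are still alive.
After=libvirtd.service
Requires=libvirtd.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/true
# Assumed path to the manual shutdown script mentioned above.
ExecStop=/usr/local/sbin/shutdown-vms.sh
TimeoutStopSec=300

[Install]
WantedBy=multi-user.target
```

The `RemainAfterExit=yes` is what makes systemd consider the unit “active” after boot, so that its `ExecStop` actually fires at shutdown.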
Around the turn of the millennium, my now-wife and I plunked down our hard-earned money and purchased a Philips TiVo series 1 with lifetime service. We had a friend with a ReplayTV, but the TiVo interface won us over, and we enjoyed the TiVo life for roughly a decade thanks to a dual 128GB hard drive upgrade and a silicondust cache card. The upgraded hard drives eventually died, and we reverted to the original 30GB drive for a few months before our directv receiver was replaced with one that had DVR capability. The directv DVR UI sucked, but it was easier than stringing IR blasters and serial control converters together to keep the TiVo going, so the TiVo ended up neglected.
I had cozy visions of my TiVo recording kids shows from a DTV converter box as part of my basement retro media station. TiVo dropped support for series 1 guide listings in 2016. This obviously had no practical impact on me, but it still makes me a little sad.
So long, TiVo. You were a great box.
As my children have been getting older, we have started camping as a family, and this year we had an epic summer vacation with stops at multiple national parks. I’m the family chef, and so obviously cooking immediately comes to mind after shelter and clothing. When we camp, I like to provide my family with food cooked from scratch similar to what I can cook at home, and naturally I like to geek out a little over the kit I use to prepare that food.
We have what I consider a fairly complete camp cooking setup, with a commonly available coleman pack-away kitchen and a couple folding tables. These are typically set up underneath an EZ-up providing shade from sun or cover from rain, depending on the weather. There is ample counter space for prep work, while folding sinks and a sprayer make for a workable dish washing process. (I have to give credit to my sister-in-law who has shared her camping setup with our family over multiple years while camping in southern Oregon, as well as to my wife, who did the grunt-work of researching equipment lists and procuring the gear.)
We broke in our own camping setup in summer 2016. Our first trip was with some family friends to River Bend Park near Sweet Home, the second was a one-nighter to McNeil Campground near Zigzag with just our family, and the third was to Oxbow Regional Park with our daughter’s schoolmates. Other trips with the same setup followed, and it has worked out pretty well so far.
At River Bend, there were a few times when I had my dual-burner propane stove going (usually with a griddle on top) while our friends had a single-burner isobutane backpacking stove cranking out hot water. We also had a fire pit, so my dutch oven got used to make beer bread and pizza. The experience started me thinking about off-grid stoves and fuels, and stirred feelings of nostalgia for campouts from my youth. There’s a whole spectrum of cooking technologies, from open-fire cooking to high-tech jetboils, and even outside of casual camping season here in the pacific northwest, the wide world of stoves continues to periodically creep back into my brain.
In contrast to the high-traffic, nearly duff-free River Bend, McNeil had dry twigs and cones from pines and doug firs throughout the campsite. My wife continually reminded me that twiggy duff is a common scenario throughout the tree-infested pacific northwest, so naturally my thoughts turned to wood-burning stoves. The solo stove was the first hit in a web search, but I’m not backpacking with my family (at least not yet), and it seems geared more towards boiling water than the extended, controlled-temperature cooking (like simmering) that real meals need. The biolite stoves also seemed interesting, until I read reviews indicating the USB charging is not terribly effective. Adding electronics to a device that could be needed in an emergency off-grid situation seems an overcomplication.
Our 2016 camping season concluded with a very wet weekend at oxbow regional park. In my haste, I neglected the scout motto and did not have my rain jacket or pants packed. We brought two EZ-ups with us, but only unpacked one, to set up over our kitchen area. The wet weather came a day earlier than expected, so I didn’t have the second EZ-up set up over our tent to keep things dry. That meant packing up a day earlier than planned. Since I didn’t get to make my dutch oven beer bread at the campout, when we returned home I fired up some charcoal in my driveway and cooked the bread there. (I could have just baked it indoors, but that’s not as fun.)
As the weather started turning, I felt obligated to squeeze out the fading echoes of summer by cooking a couple of dinners for my family outside. In contrast to car camping, at home I have a 20# propane tank connected to a grill. After a trip to gather provisions, I had my suitcase propane stove cranking away on the back patio. In my wild remodel fantasies, I would have a covered outdoor kitchen with fireproof counters that could be configured for wood-stove cooking, dutch oven (charcoal) cooking, or propane. Maybe a small fireplace or hearth for a cozy fire. And since this is all make-believe, there would also be an easy path between my indoor and outdoor kitchens so I wouldn’t have to duplicate food-prep areas. Or maybe a small food-prep area with a sink and a multi-powered (AC, DC, propane) fridge?
2016 summary: I enjoy cooking. I enjoy cooking outdoors. I’ve got a camp kitchen bug, and I can’t shake it. Some of it may have to do with being chased out of my own kitchen by children setting up train tracks all over the floor and counters, with horrifying political nonsense blaring in from the other room, and me finding some sanctuary in cooking. (dancing toddlers optional.)
This post was started in 2016, and got a little out of hand over the last couple years…
Mottainai is a Japanese term that roughly translates to a sense of regret or shame at throwing old objects away. Old objects (historically, those 100 years old) may gain spirits (tsukumogami), making them even more wasteful to discard. Much of my old hardware gives me these feelings.
Thus we return to my AlphaServer 1000A 5/400, which years ago, when I started this post, dropped not only a disk from its RAID5 but also the spare it was going to rebuild onto. Luckily I had another spare drive, and still had written notes on how to perform the (offline) component restore, and I was able to move home directories to more recent hardware to lessen the stress on the array. It has held without losing another disk for over two years. (The uptime typically reflected long-term power outages more than hardware issues.)
I dragged out the process of moving everything off the Alpha and retiring it permanently, since it still seems somehow disrespectful. It’s a huge (8U) box, eats loads of power (250W idle), and is easily bested compute-wise by my decades-old x86 hardware. Apparently the 1000A was also not well known for reliability, but somehow mine has worn the ages well. It was the main academic server at my alma mater for many years, and ran my primary domain services for roughly two decades after that.
I could probably have taken the disk array offline to save half the power, and for a period I was seriously examining what it would take to install a PCI SATA controller with an external disk tray. This, of course, never happened.
Tonight I migrated the last service off of it (NIS), and it has been powered down. The house is a little quieter, and I am a little sad. Alpha never got what it deserved, buried by a double acquisition and a failed bet (IA64).
If nothing is completely broken by its offline-ness, it’s only a matter of time until it is evicted from my basement. This is perhaps the event that hurts the most; even though I cannot justify the space taken up by inanimate obsolete computers, I feel like they deserve space in a shrine of some sort, not disassembly.
In a presentation by Gordon Bell (formatting his):
Minicomputers (for minimal computers) are a state of mind; the current logic technology, …, are combined into a package which has the smallest cost. Almost the sole design goal is to make the cost low; …. Alternatively stated: the hardware-software tradeoffs for minicomputer design have, in the past, favored software.
Minicomputer may be classified at least two ways:
- It is the minimum computer (or very near it) that can be built with the state of the art technology
- It is that computer that can be purchased for a given, relatively minimal, fixed cost (e.g., $10K in 1970.)
Does that still hold? $10k in 1970 dollars is over $61k in 2016 dollars, which would buy a comfortably equipped four-socket brickland (E7 broadwell) server, or two four-socket grantleys (E5 broadwell). We’re at least in the right order of magnitude.
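The conversion is just a CPI ratio. A quick sanity check, using BLS CPI-U annual averages that I’m quoting from memory (38.8 for 1970, 240.0 for 2016), so treat the exact figures as approximate:

```python
# Approximate BLS CPI-U annual averages; exact figures vary by month.
cpi_1970 = 38.8
cpi_2016 = 240.0

factor = cpi_2016 / cpi_1970            # ~6.2x inflation over 46 years
minicomputer_budget_2016 = 10_000 * factor
print(round(minicomputer_budget_2016))  # ~61856 -> "over $61k"
```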
Perhaps a better question is whether modern intel xeon platforms (like grantley or the upcoming purley) are minimal computers. Bell had midi- and maxicomputers as categories past the minicomputer, with the supercomputer at the top.
We are definitely in the favoring-software world. Modern x86 is microcoded these days, and microcontrollers are everywhere in modern server designs: power supplies, voltage regulators, fan controllers, the BMC. The Xeon itself has a power control unit (PCU), and the chipset has the management engine (ME). Most of these are closed and not directly programmable by a platform owner. Part of this is security-related; you don’t want an application rewriting your voltage regulator settings or hanging the thermal management functions of your CPU. Part of it, though, is keeping proprietary trade secrets. The bringup flow between the Xeon and the chipset (ME) is heavily proprietary, and Intel’s deliberate decision not to support third-party chipsets keeps it in trade-secret land.
However, I argue that modern servers have grown to the midi- if not maxicomputer level of complexity. Even in the embedded world, the level of integration on modern ARM parts seems to put most of them in the midicomputer category. Even AVRs seem to be climbing out of the microcomputer level.
On the server side, what if we could stop partitioning into multiple microcontrollers and coalesce their functionality? How minimal could we make a server system and still retain ring 3 ia32e (x64) compatibility? Would we still need the console-in-system BMC? Could a real-time OS on the main CPU handle its own power and thermal telemetry? What is minimally needed for bootstrapping in a secure fashion?
I’ll stop wondering about these things when I have answers, and I don’t see any. So I continue to jump down the platform architecture rabbit hole…
I’m a big fan of wires for networking. You usually know where they go if you are the one who installed them, they are reliable for long periods without maintenance, and they are not typically subject to interference without physical access. They are cumbersome for battery-powered devices, though, so although I have pulled cat5e through multiple rooms in my house, I eventually relented and installed an 802.11b bridge to my home network in 2005. My first B bridge was based on an Atmel chipset; I don’t remember much beyond that, except that performance was really poor.
My first B wireless bridge was replaced with a commercial-grade b/g model after it became apparent that even light web browsing was unusable with three wireless devices. The network was originally left open (but firewalled), which lasted until a neighbor’s visiting laptop generated approximately 20,000 spam messages through my mail infrastructure during the holidays. (My dual-50MHz sparc 20 dutifully delivered about two-thirds of them before I noticed a few hours later. Luckily I only ended up on a single blacklist as far as I could tell, and the listing expired a few days later.) I set a password and went on my merry way.
The B/G configuration survived until I realized that my only B holdout was an old laptop with a PCMCIA lucent orinoco silver wireless card; everything else was capable of G. The laptop was due for a hand-me-down update anyway, so it was retired and my network was configured exclusively for 802.11g. Observed network speeds jumped, and through the years more devices joined the network.
With 2017 rolling around, I figured it was time to upgrade the wireless. Something capable of B/G/N would be easily available, and since N can operate at 5GHz, the plan was to keep the existing G network at 2.4GHz and augment it with N at 5GHz. Yes, this meant having two wireless bridges, but I’d be able to cover all the standards.
My wife has had an amazon kindle since the first generation (she still has it, and still uses it on occasion), but her seventh-generation kindle never worked correctly with my G network (I even tried B/G and B-only), and it’s been kind of a sore point since she got it. It only supports N on 2.4GHz, so that nixed my idea of splitting G and N across frequency bands, but we’re far enough away from our neighbors that channel contention doesn’t seem too bad.
After getting N working at its new location, with new router setup, I started re-associating devices from G to N. When I was done, there weren’t any G devices left. Everything in active use already supported N.
Now to fully decommission the old wireless router, but that’s another post…
Ignore updates for a couple of months and get hit with viagra spam. Luckily the database wasn’t corrupted (that I can tell), and pre-hack versions of my posts have been restored.
This is the modern software landscape. Quality. Was static HTML so bad? It was limiting, at least. Are there more secure ways to do this? Definitely.