what happened to the minicomputer?

In a presentation by Gordon Bell (formatting his):

Minicomputers (for minimal computers) are a state of mind; the current logic technology, …, are combined into a package which has the smallest cost. Almost the sole design goal is to make the cost low; …. Alternatively stated: the hardware-software tradeoffs for minicomputer design have, in the past, favored software.
HARDWARE CHARACTERISTICS
Minicomputer may be classified at least two ways:

  • It is the minimum computer (or very near it) that can be built with the state of the art technology
  • It is that computer that can be purchased for a given, relatively minimal, fixed cost (e.g., $10K in 1970.)

Does that still hold? $10k in 1970 dollars is over $61k in 2016 dollars, which would buy a comfortably equipped four-socket brickland (E7 broadwell) server, or two four-socket grantleys (E5 broadwell). We’re at least in the right order of magnitude.
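
The conversion is just a consumer-price-index ratio. A rough sketch in Python, using approximate CPI-U annual averages (the exact figure depends on which index and year-averaging you pick):

    # Rough inflation adjustment via a CPI ratio.
    # CPI-U annual averages below are approximate.
    CPI_1970 = 38.8
    CPI_2016 = 240.0

    def to_2016_dollars(dollars_1970: float) -> float:
        """Scale a 1970 dollar amount into 2016 dollars."""
        return dollars_1970 * (CPI_2016 / CPI_1970)

    print(f"${to_2016_dollars(10_000):,.0f}")  # roughly $61,856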

Perhaps a better question is whether modern intel xeon platforms (like grantley or the upcoming purley) are still minimal computers. Bell identified midi- and maxicomputers as categories beyond the minicomputer, with the supercomputer at the top.

We are definitely in the favoring-software world — modern x86 is microcoded these days, and microcontrollers are everywhere in modern server designs: power supplies, voltage regulators, fan controllers, the BMC. The Xeon itself has the power control unit (PCU), and the chipset has the management engine (ME). Most of these are closed, and not directly programmable by a platform owner. Part of this is security-related — you don’t want an application to be able to rewrite your voltage regulator settings or hang the thermal management functions of your CPU. Part of it is keeping proprietary trade secrets, though. The bringup flow between the Xeon and the chipset (ME) is heavily proprietary, and Intel’s deliberate decision not to support third-party chipsets keeps it in trade-secret land.

However, I argue that modern servers have grown to the midi- if not maxicomputer level of complexity. Even in the embedded world, the level of integration on modern ARM parts seems to put most of them in the midicomputer category. Even AVRs seem to be climbing out of the microcomputer category.

On the server side, what if we could stop partitioning into multiple microcontrollers and coalesce their functionality? How minimal could we make a server system and still retain ring 3 ia32e (x64) compatibility? Would we still need the console-in-system BMC? Could a real-time OS on the main CPU handle its own power and thermal telemetry? What is minimally needed for bootstrapping in a secure fashion?
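
As one data point on the telemetry question: the package-level energy counters the PCU maintains are already readable from the main CPU. A minimal sketch, assuming a Linux host with the intel_rapl powercap driver loaded (the sysfs path and domain layout vary by platform):

    # Estimate package power from the RAPL energy counter exposed by the
    # Linux intel_rapl powercap driver. Assumes /sys/class/powercap/intel-rapl:0
    # exists and is readable; domain naming varies by platform.
    import time

    ENERGY = "/sys/class/powercap/intel-rapl:0/energy_uj"  # microjoules

    def read_energy_uj() -> int:
        with open(ENERGY) as f:
            return int(f.read())

    e0 = read_energy_uj()
    time.sleep(1.0)
    e1 = read_energy_uj()

    # The counter wraps at max_energy_range_uj; wrap handling omitted for brevity.
    print(f"package power: {(e1 - e0) / 1e6:.1f} W")

Reading telemetry is the easy part, of course; the closed-loop control the PCU does today is the harder question.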

I’ll stop wondering about these things when I have answers, and I don’t see any. So I continue to jump down the platform architecture rabbit hole…

Now only one generation behind on wireless

I’m a big fan of wires for networking. You usually know where they go if you are the one who installed them, they are reliable for long periods of time without maintenance, and they are not typically subject to interference without physical access. They are cumbersome for battery-powered devices, so although I have pulled cat5e through multiple rooms in my house, I did eventually relent and installed an 802.11b bridge to my home network in 2005. My first B bridge was based on an Atmel chipset, and I don’t remember much beyond that except that performance was really poor.

My first B wireless bridge was replaced with a commercial-grade b/g model after it became apparent that even light web browsing was unusable with three wireless devices. The network was originally left open (but firewalled), which lasted until a neighbor’s visiting laptop during the holidays generated approximately 20,000 spam messages through my mail infrastructure. (My dual 50MHz sparc 20 dutifully delivered about two-thirds of them before I noticed a few hours later. Luckily, as far as I could tell, I only ended up on a single blacklist, and that listing expired a few days later.) I set a password and went on my merry way.

The B/G configuration survived until I realized that I only had a single B holdout in the form of an old laptop which used a PCMCIA lucent orinoco silver wireless card — everything else was capable of G. The laptop was due for a hand-me-down update anyway, so it was retired and my network was configured exclusively for 802.11g. Observed network speeds jumped, and through the years more devices joined the network.

With 2017 rolling around, I figured it was time to upgrade the wireless. Something capable of B/G/N would be easily available, and I knew that N could operate on 5GHz, so I planned to keep the existing G network on 2.4GHz and augment it with N on 5GHz. Yes, this meant having two wireless bridges, but I’d be able to cover all the standards.

My wife has had an amazon kindle since the first generation (still has it, still uses it on occasion), but her seventh-generation kindle never worked correctly with my G network (I even tried B/G and B-only), and it’s been kind of a sore point since she got it. It only supports N on 2.4GHz, so that nixed my idea of splitting G and N across frequency bands, but we’re far enough away from our neighbors that channel capacity doesn’t seem too bad.

After getting N working at its new location, with the new router set up, I started re-associating devices from G to N. When I was done, there weren’t any G devices left. Everything in active use already supported N.

Hah.

Now to fully decommission the old wireless router, but that’s another post…