coalescing the VMs

I got a Romley (dual E5-2670 Jaketowns) last November with the plan to pull in the VMs from the three Xen hosts I currently run. I’ve named it “Luxor.” It idles at around 150W, which should save me a bit on the power bill, and even though it currently has only 1TB of mirrored storage, thin LVM provisioning should let me stretch that a bit. It’s easily the fastest system in my house now, with the possible exception of my wife’s Haswell MacBook Pro on single-threaded performance.
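For the curious, the thin-provisioning setup is just a thin pool carved out of the volume group, with guest volumes allocated against it. A minimal sketch, with made-up VG name and sizes:

    # carve a thin pool out of the mirrored VG (names/sizes are made up)
    lvcreate --type thin-pool -L 800G -n thinpool vg0
    # thin volumes only consume pool space as blocks are actually written,
    # so their combined virtual size can exceed the pool
    lvcreate -V 200G --thinpool vg0/thinpool -n guest0-disk

Overcommit works because extents are only allocated on write; lvs shows how full the pool actually is.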

Luxor has 96GiB [now 128GiB] of memory. I think this may exceed the combined total of every other system I have in the house. I figured that the price of the RAM alone justified the purchase. Kismet. Looking at the memory configuration, I have six 8GiB DIMMs per socket, which across four channels means two channels carry two DIMMs and two carry one, and that uneven population prevents optimal interleaving. Adding two identical DIMMs per socket, or moving two DIMMs from one socket to the other, should alleviate this. (I doubt it’s causing performance regressions, but the DIMMs are cheap and available, and I plan on keeping this machine around until it becomes uneconomical to run (or past that point, if history is an indicator), so DIMMs to expand it to 128GiB should be arriving soon.)
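Checking the population without opening the case is easy enough; the locator strings vary by vendor, but something like this shows which slot each DIMM sits in:

    # per-slot DIMM sizes and locations; the locator encodes channel/slot
    dmidecode -t memory | grep -E 'Locator|Size'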

In mid-December, the first olde Sun x2200m2 Opteron (“Anaximander”) had its two VMs migrated and was shut down. A second x2200m2 (“Anaximenes”, which hosts the bulk of my infrastructure, including this site) remains. While I was writing this post, a Phenom II X2 545 (“Pythagoras”) had its 2TB NFS/CIFS storage migrated to my FreeBSD storage server (“Memphis”), along with some pkgsrc build VMs and secondary internal services.

Bootloader barf-bag for x86 is still in full effect. I couldn’t figure out how to PXE-boot without putting the system in legacy BIOS mode, and I gave up trying to get the Ubuntu installer to do a GPT layout, let alone boot from it. I figure I can migrate the LVM volumes onto new GPT-partitioned disk(s), install the EFI grub, switch the system to EFI mode, and Bob’s your uncle. (He’s my brother-in-law, but close enough.) At least that’s the plan.
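The move itself should be routine LVM juggling. A rough sketch, with hypothetical device names, assuming Ubuntu’s grub-efi package is installed:

    # GPT-partition the new disk: a 512M EFI system partition plus an LVM PV
    sgdisk -n1:0:+512M -t1:ef00 -n2:0:0 -t2:8e00 /dev/sdX
    pvcreate /dev/sdX2
    vgextend vg0 /dev/sdX2
    # evacuate the old MBR disk's extents, then drop it from the VG
    pvmove /dev/sdY1
    vgreduce vg0 /dev/sdY1
    # with the ESP mounted at /boot/efi, install the EFI grub
    mkfs.vfat /dev/sdX1
    mount /dev/sdX1 /boot/efi
    grub-install --target=x86_64-efi --efi-directory=/boot/efi

The nice part is that pvmove happens online, so the VMs keep running while their volumes walk over to the new disk.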

The VMs on Anaximenes have been a little trickier to move, since I need to make sure I’m not creating any circular dependencies between the infrastructure VMs and the ability to boot Luxor itself. Can we start VMs without DHCP and DNS being up, for instance?
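The usual trick is to make the bootstrap path self-contained: the host and the infrastructure guests all get static addresses, so nothing in the boot sequence waits on DHCP or DNS. If libvirt is managing the guests (an assumption on my part; guest names are made up), autostart covers the rest:

    # have libvirt start the infrastructure guests along with the host
    virsh autostart dns0
    virsh autostart dhcp0
    # the guests configure static IPs internally, so they don't depend
    # on the very services they provide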

Systemd is a huge PITA and isn’t able to shut down the VMs cleanly, even after fiddling with the unit files to add some dependency ordering. My current theory is that it’s killing off the underlying qemu instances before they finish, so the VMs essentially get stuck. Running the shutdown script manually works fine, and the VMs come down cleanly.
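One shape of a fix, sketched under the assumption that a start/stop script pair exists (the paths below are hypothetical): a oneshot unit whose ExecStop runs the shutdown script, with a stop timeout long enough that systemd doesn’t start SIGKILLing qemu mid-shutdown. Since systemd stops units in the reverse of start order, ordering the unit after the network keeps the guests’ NICs up while they shut down:

    # hypothetical unit wrapping the existing start/shutdown scripts
    cat > /etc/systemd/system/vm-guests.service <<'EOF'
    [Unit]
    Description=Start and cleanly stop the qemu guests
    After=network.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/usr/local/sbin/vm-start
    ExecStop=/usr/local/sbin/vm-shutdown
    # give the guests plenty of time before systemd resorts to SIGKILL
    TimeoutStopSec=300

    [Install]
    WantedBy=multi-user.target
    EOF
    systemctl daemon-reload && systemctl enable vm-guests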
