1. New Virtualization Host
Now that we are using KernelVirtualMachine and an AutomatedSystemInstall, getting systems into use without weeks or months of delay is finally feasible. This has allowed mire to retire at last, and we are now making strides toward allowing deleuze to retire as well. Unfortunately, although fritz gained some breathing room with a RAM upgrade, it will again be at capacity within a year if we continue to succeed (and we're running thin on redundancy), so we need a new machine.
The immediate uses for the new machine:
- Base system as a KernelVirtualMachine, AndrewFileSystem, MitKerberos, and DNS host
  - Trusted base of services that all other services rely upon
  - Immediate migration of any VMs to a more powerful machine (i.e. tons of breathing room, everyone can run dynamic servers!)
- Immediate setup of PostBog and PostNavajos on wheezy (thanks to McCarthy, we can have a wheezy machine ready for members to use within minutes after running the debian-installer, pinky swear; see the sketch after this list)
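As a rough illustration of how quickly a guest could come up, here is a minimal sketch (Python wrapping virt-install) of kicking off an unattended wheezy install on the KVM host. The guest name, disk path, and preseed URL are placeholders, not our actual configuration.

```python
# Sketch: start an unattended Debian wheezy guest install on the KVM host.
# All names, sizes, and the preseed URL below are illustrative placeholders.
import subprocess

def create_wheezy_guest(name, disk_path, preseed_url, ram_mb=1024, disk_gb=10):
    """Launch virt-install with a preseeded debian-installer for wheezy."""
    subprocess.check_call([
        "virt-install",
        "--name", name,
        "--ram", str(ram_mb),
        "--disk", "path=%s,size=%d" % (disk_path, disk_gb),
        # Pull the wheezy installer straight from a Debian mirror.
        "--location", "http://ftp.debian.org/debian/dists/wheezy/main/installer-amd64/",
        # Hand the installer a preseed file so it runs unattended.
        "--extra-args", "auto=true priority=critical url=%s" % preseed_url,
        "--graphics", "none",
        "--console", "pty,target_type=serial",
    ])

if __name__ == "__main__":
    create_wheezy_guest("postbog-test", "/var/lib/libvirt/images/postbog-test.img",
                        "http://example.hcoop.net/preseed.cfg")
```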
Over the next few months, other uses will appear:
- VM for misc services and admin tasks
- VM for redundant mail services
- VMs to prepare for Debian Jessie (systemd, apache 2.4, ... mean many things to test)
This means we need hardware featuring something like:
- 8+ cores
- At least 32G RAM
- Room to add many more drives (8+)
- Space for a number of VMs
- Space for afs storage
- A pair of drive bays available for SSDs once bcache works in Debian (see the sketch after this list)
- Space for on-site staging of backups
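For reference, once bcache is usable on Debian, wiring a pair of SSDs in as a cache would look roughly like the sketch below. Device names are placeholders and this is just the generic bcache-tools procedure, not a tested recipe for our hardware.

```python
# Sketch: attach an SSD as a bcache cache in front of a spinning-disk backing device.
# /dev/sdX and /dev/sdY are placeholders for the real backing and cache devices.
import subprocess

def setup_bcache(backing_dev="/dev/sdX", cache_dev="/dev/sdY"):
    """Format backing + cache devices with make-bcache and attach them."""
    # Formatting both in one invocation also attaches the cache set automatically.
    subprocess.check_call(["make-bcache", "-B", backing_dev, "-C", cache_dev])
    # The combined device then shows up as /dev/bcache0 and can be
    # formatted and mounted like any other block device.
    subprocess.check_call(["mkfs.ext4", "/dev/bcache0"])
```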
1.1. Possible Servers
Find a price quote, and list any machines you think would suit the task.
1.1.1. PowerEdge R515
- Up to 12 3.5" hot swap SATA drives (think of the possibilities)
- Dual 8-core Opteron 4376HE (2.6GHz)
- 32G RAM (4x8G Dual Rank Low voltage 1600MHz)
- Fancy sliding rails, without a cable management arm
- 2x1TB disks
- Redundant power supply
- Only BMC
- Drive carriers ($12 each)
- Hybrid carrier for 2.5" drives ($23 each)
Quotes:
- On 2014-05-17: $2870 pe515_2014-05-17.pdf
- On 2013-07-05: $2630 pe-515_2013-07.pdf (4276HE, 1333MHz RAM)
- On 2013-01-20: $2413 pe-515.pdf (old, 6-core)
1.1.1.1. Other Considerations
- The non-HE Opterons are faster and ~$110 less each. It's only an extra 30W per CPU... but how close are we to exceeding our circuit's power allocation (we only have 10A)? See the rough calculation after this list.
- Do we need redundant power supplies? There's no benefit for datacenter failure: we only have one power circuit. We've also never had a power supply fail. But perhaps they live longer by balancing load between themselves? Or is it a total waste of money? Redundant power supplies do at least reduce the worst-case failure mode of a server.
- How about that cable management arm? A search reveals folks both love and hate the things. One consideration is that we only have a quarter cabinet, so we can't feasibly install dedicated cable management tools.
- Sliding rails seem like a good deal, if only for the potential of populating the other half of the memory banks when memory for the machines becomes less expensive.
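A rough back-of-the-envelope check on the 10A question. The per-machine draws below are guesses for illustration, not measurements:

```python
# Rough power-budget check for the non-HE vs HE Opteron question.
# All wattage figures are guesses for illustration, not measured values.
CIRCUIT_AMPS = 10
CIRCUIT_VOLTS = 120          # assuming a 120V circuit
budget_watts = CIRCUIT_AMPS * CIRCUIT_VOLTS   # 1200W total

current_draw = {             # guessed typical draw per existing machine
    "fritz": 250,
    "deleuze": 250,
    "mire_or_hopper": 150,
}
new_server_he = 300          # guessed draw with the HE (low-power) Opterons
extra_non_he = 2 * 30        # two CPUs at +30W each for the non-HE parts

used = sum(current_draw.values()) + new_server_he
print("HE build headroom:     %dW" % (budget_watts - used))
print("non-HE build headroom: %dW" % (budget_watts - used - extra_non_he))
```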
2. Storage
- 2x 500G 6Gb/s SATA disks (on-hand, probably good for load balancing afs volumes)
- 2x 3TB for backups (prices: ??)
- On-site obnam repositories for afs volumes, databases, and non-debian-installed files from individual machines (see the sketch after this list)
- 2x 240-480G SSDs for bcache?
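A minimal sketch of what the on-site obnam runs might look like. The repository path and source directories are placeholders showing the shape of the command, not our actual backup policy:

```python
# Sketch: drive obnam backups into an on-site repository on the backup RAID.
# The repository path and source list are placeholders.
import subprocess

BACKUP_REPO = "/srv/backup/obnam"    # hypothetical on-site repository
SOURCES = [
    "/srv/backup/afs-dumps",         # staged afs volume dumps (example path)
    "/srv/backup/db-dumps",          # database dumps (example path)
    "/etc",                          # non-debian-installed files
]

def run_backup():
    for src in SOURCES:
        subprocess.check_call(["obnam", "backup", "--repository", BACKUP_REPO, src])

if __name__ == "__main__":
    run_backup()
```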
3. Backup Drives
As feared, it turns out that we cannot do an afs volume dump to the afs volume partition of the fileserver without grinding hcoop to a halt. ClintonEbadi's early test resulted in fritz thrashing itself into oblivion, and the entire site losing access to the fileserver (never fear! services recovered flawlessly on their own mere minutes after killing the dump). Neither deleuze nor mire nor hopper have the free space required for backing up all of the data we have in afs now, never mind future data.
Thus: we need a pair of drives for a third RAID1 on fritz or the new machine, dedicated to backups. We have nearly 400G in data that needs backing up now, so 1TB is the absolute minimum. However, much larger drives are not much more expensive and it's not unrealistic that we would have well over 2TB of data in need of backing up 18 months from now.
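Once the dedicated backup RAID exists, the dumps themselves are straightforward. A sketch (volume names and the target directory are placeholders), writing each volume to a file on the backup array instead of back onto the afs partition:

```python
# Sketch: dump afs volumes to the dedicated backup RAID rather than to the
# fileserver's own afs partition. Volume names and target path are placeholders.
import os
import subprocess

DUMP_DIR = "/srv/backup/afs-dumps"          # hypothetical mount point of the backup RAID
VOLUMES = ["user.example", "mail.example"]  # placeholder volume names

def dump_volumes():
    for vol in VOLUMES:
        target = os.path.join(DUMP_DIR, vol + ".dump")
        # -localauth lets this run from the fileserver without a user token.
        subprocess.check_call(["vos", "dump", "-id", vol, "-file", target, "-localauth"])

if __name__ == "__main__":
    dump_volumes()
```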
3.1. Drive Options
We have caddies for all six bays in FritzInfo plus an extra, and the new server will have 12 drive bays total, so we have options.
We currently have two 500G drives that would work in the new machine, and less than 300G of space in use by all openafs volumes. However, we might also want to use those for the operating system disks or similar.
4. OpenAFS Drives
We're only using a bit under 300G of data now, and have a 1TB partition on fritz for afs. So we probably only need another RAID1 of 1TB disks, although we should investigate larger drives to make sure we hit the optimal price/GB vs predicted needs.
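A trivial price/GB comparison helper; the prices below are placeholders to be replaced with real quotes:

```python
# Compare price per GB across candidate drive sizes.
# Prices are placeholders; fill in real quotes before deciding.
candidates = {           # size in GB -> price in USD (made-up numbers)
    1000: 90,
    2000: 110,
    3000: 140,
}
for size_gb, price in sorted(candidates.items()):
    print("%4dGB: $%.3f/GB" % (size_gb, float(price) / size_gb))
```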
5. Fixing Remote Access
Or: what to do with hopper and mire?
Our KvmAccess is currently broken. The belkin kvm is working properly, but either the startech IpKvm or its power brick is dead. We are going to test it with a new power brick, but there are other reasons to get rid of it...
It turns out that all of our Dell machines support IPMI power control and serial console over Ethernet. So we could use hopper or mire as a frontend to Fritz/Deleuze/The-new-machine in addition to light duty as a secondary machine. Both of them have out-of-band remote power and console, although RebootingMireSp has a much saner interface than HopperServiceProcessor.
The argument:
The KvmAccess system involves a cumbersome vga+ps/2 cable for each machine. These are the chief contributors to the mess in the rack. And now that all of our systems are native USB, we've had trouble with at least two different usb to ps/2 adapters leading to loss of remote keyboard. The IpKvm is also pretty useless if a machine powers off, since it offers no power control; for a hung machine our only recourse is sending ctrl-alt-delete or SysRq-B.
All of our important machines (Deleuze, Fritz, New Server) support IPMI over ethernet. This gives us power control and access equivalent to a physical console via serial-over-LAN. Both mire and hopper have two ethernet ports, so we could just swap a few cables around and have a private IPMI management LAN behind either. If we really wanted to, we could also ditch the belkin kvm switch and attach the IpKvm directly to either, to provide a secondary out-of-band access method.
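A sketch of what driving that from mire or hopper might look like with ipmitool. The BMC hostnames and credentials are placeholders; the exact setup depends on how each service processor is configured:

```python
# Sketch: out-of-band power control and serial console via IPMI from the
# console-server machine. BMC hostnames and credentials are placeholders.
import subprocess

def ipmi(bmc_host, user, password, *command):
    """Run an ipmitool command against a remote BMC over the management LAN."""
    subprocess.check_call(["ipmitool", "-I", "lanplus",
                           "-H", bmc_host, "-U", user, "-P", password]
                          + list(command))

# Power-cycle a wedged machine:
#   ipmi("fritz-bmc.mgmt", "admin", "secret", "chassis", "power", "cycle")
# Attach to the serial console (serial-over-LAN):
#   ipmi("fritz-bmc.mgmt", "admin", "secret", "sol", "activate")
```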
We also need to keep at least mire or hopper around for a few tasks, so one of them will have to stay in the power budget until after we decommission deleuze. It therefore seems sensible (assuming configuring out-of-band access to the Dell servers via IPMI works as well as it should) to use one as a console server.