Migration2009 / HardwareUpgrade

1. General Specifications

1.1. All Machines

1.2. Core Services Machine

1.3. User Services Machine

1.4. Serial Console Server / IPKVM

We need some form of worst-case access to the physical consoles of the servers. IPKVM/KVM units are fairly expensive, and we potentially don't need everything they provide since we are not running X or anything similar remotely. Given that we have a nice IPKVM and KVM setup now, we may want to ship that to the new data center, but then we would be running for a period of time with no remote equivalent of physical access, on a setup that is known to occasionally go down and become inaccessible.

Alternatively, we could procure a serial console server for a bit less money and have access to the serial console of every machine, which ought to be just as good as having remote keyboard/monitor access via VNC. Additionally, we would gain access to the IPMI capabilities of the connected machines (which may lower the cost of each machine by $200-$300, since we could avoid buying service processors for them). If we got a fancy switch, it might also have a serial console for configuration.
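
To make the comparison concrete, here is a minimal sketch (Python driving the standard ipmitool client via subprocess) of what remote access over IPMI would look like: query the chassis power state and attach to a Serial-over-LAN console. The BMC address and credentials are hypothetical placeholders, and the exact capabilities depend on the BMCs in the servers we buy.

    import subprocess

    # Hypothetical BMC address and credentials -- substitute real values.
    BMC_HOST = "192.168.100.11"
    BMC_USER = "admin"
    BMC_PASS = "secret"

    def ipmi(*args):
        """Run an ipmitool command against the BMC over the LAN interface."""
        cmd = ["ipmitool", "-I", "lanplus",
               "-H", BMC_HOST, "-U", BMC_USER, "-P", BMC_PASS] + list(args)
        subprocess.check_call(cmd)

    # Query chassis power state (works even when the OS is down).
    ipmi("chassis", "power", "status")

    # Attach to the serial console redirected over the network
    # (interactive; blocks until the SOL session is closed).
    ipmi("sol", "activate")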

1.4.1. General Specs

1.4.2. Console Server

1.4.2.1. Avocent Cyclades CS 8-Port Console Server

It appears this unit does not support IPMI commands; unless the BMCs of the servers we get have some type of text console interface over serial, this is suboptimal.

1.4.2.2. OpenGear CM400x

These are not rack-mount units, but they seem to be more in line with what we need from a console server. It appears (we need to check the docs more thoroughly) that they support connecting to IPMI devices via the network (which it seems we can secure by restricting IPMI access to the IP of the console server), in addition to supporting direct serial consoles.
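
For the console server to reach the BMCs over the network, each BMC would need an address on that network. A rough sketch of setting this up from the host side, assuming ipmitool is available on each server and that the BMC's LAN channel is channel 1 (both assumptions; the addressing is made up):

    import subprocess

    # Hypothetical management-network addressing; channel 1 is the usual
    # LAN channel, but it varies by BMC.
    CHANNEL = "1"
    BMC_ADDR = "192.168.100.11"
    NETMASK = "255.255.255.0"

    def ipmi_local(*args):
        """Configure the local BMC over the system interface (no network needed)."""
        subprocess.check_call(["ipmitool"] + list(args))

    ipmi_local("lan", "set", CHANNEL, "ipsrc", "static")
    ipmi_local("lan", "set", CHANNEL, "ipaddr", BMC_ADDR)
    ipmi_local("lan", "set", CHANNEL, "netmask", NETMASK)

    # Verify the resulting LAN configuration.
    ipmi_local("lan", "print", CHANNEL)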

These devices also run entirely Free Software, and there is a dev kit that looks reasonably easy to use, so we can customize them. The manual is ambiguous as to whether the CM400x hardware is capable of existing on multiple protocol-based tagged VLANs, but ClintonEbadi has emailed OpenGear requesting further information on this.

1.4.2.2.1. OpenGear CM4008

1.4.2.2.2. OpenGear CM4001

If we use Serial-over-LAN for everything (assuming it can be secured without a dedicated management LAN), the CM4001 should be fine for our use.

1.4.2.3. OpenGear IM4004-5

If the CM4001 cannot coexist on multiple protocol-based VLANs, this looks like our best bet for a console server -- we can connect eth0 of each server to the console server's management LAN and eth1 to the primary LAN.

1.5. Network Switch

If we can get by with a CM4001, we should spend a bit more on a proper smart switch so that we can set up multiple VLANs: initially at least a public VLAN and a private, IPMI-only, protocol-based VLAN. Later on we may want to experiment with routing database and AFS traffic locally on a VLAN with jumbo frames enabled (according to a cursory Google search this would increase database throughput, but it would likely have little effect on AFS until OpenAFS 1.6 is released with the new RxTCP transport layer).
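
As a sketch of what the host side of such a VLAN would involve, using iproute2 driven from Python (the interface name, VLAN id, and addressing here are invented for illustration, and the switch would also need the VLAN tagged and jumbo frames enabled on the relevant ports):

    import subprocess

    # Hypothetical interface, VLAN id, and addressing for a local
    # database/AFS VLAN with jumbo frames.
    PARENT = "eth1"
    VLAN_ID = "10"
    VLAN_DEV = "eth1.10"
    ADDR = "10.0.10.2/24"

    def ip(*args):
        """Run an iproute2 command."""
        subprocess.check_call(["ip"] + list(args))

    # The parent interface must carry jumbo frames too.
    ip("link", "set", "dev", PARENT, "mtu", "9000")

    # Create the tagged VLAN interface and enable jumbo frames on it.
    ip("link", "add", "link", PARENT, "name", VLAN_DEV, "type", "vlan", "id", VLAN_ID)
    ip("link", "set", "dev", VLAN_DEV, "mtu", "9000")
    ip("addr", "add", ADDR, "dev", VLAN_DEV)
    ip("link", "set", "dev", VLAN_DEV, "up")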

1.5.1. Unmanaged

1.5.1.1. US Robotics 8-port Gigabit Switch

This looks like it will be an acceptable switch until we can afford (or need) a managed switch.

2. Shopping list

2.1. Non-Dell Vendors

From other vendors, systems comparable to the PowerEdge 2970 cost...

2.2. Option A

2.3. Option B

2.3.1. Why Two PowerEdge 2970s

Although this setup would use 6U rather than 5U, the PowerEdge 2970 offers a much better price/performance ratio than the 1U R410. For a bit less than the cost of a single-processor R410 we could have eight cores in both machines (avoiding a difficult/time-consuming processor upgrade later on).