
Diff for "Migration2009/HardwareUpgrade"

Differences between revisions 11 and 12
Revision 11 as of 2009-09-21 15:07:30
Size: 9558
Editor: ClintonEbadi
Comment: add IM4005 to console server list and notes on what kind of switch we should procure
Revision 12 as of 2009-09-21 16:20:15
Size: 6724
Editor: ClintonEbadi
Comment: move software stuff to its own page
Line 5: Line 5:

=== Hardware ===
Line 14: Line 12:
=== Software Choices ===

==== Virtualization ====

Virtualization would allow us to avoid having to dedicate an entire physical machine to the KDC/AFS server. It would also allow us to snapshot and migrate VM instances between machines in the future if needed. OpenVZ at least allows VM images to be suspended, migrated to another physical machine, and resumed with no apparent interruption to userspace (aside from network connections and such potentially timing out). This kind of flexibility would make future expansion a lot less painful.
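
As a rough illustration, the sketch below drives an OpenVZ live migration with vzmigrate; the container ID and destination hostname are made-up placeholders, not anything we actually run.

{{{#!python
#!/usr/bin/env python
"""Sketch: live-migrate an OpenVZ container to another physical host.
The container ID and destination hostname are placeholders."""
import subprocess

CTID = "101"                 # hypothetical container ID
DEST = "node2.example.org"   # hypothetical destination host

def live_migrate(ctid, dest):
    # vzmigrate --online checkpoints the container, copies its private
    # area and runtime state to the destination, and resumes it there;
    # userspace inside the container sees only a brief pause.
    subprocess.check_call(["vzmigrate", "--online", dest, ctid])

if __name__ == "__main__":
    live_migrate(CTID, DEST)
}}}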
Line 21: Line 13:

=== Hardware ===
Line 32: Line 22:
=== Software ===

 * Base operating system should just be Debian, set up as either a Xen or OpenVZ server
 * Things which logically belong in separate machines go into VM images
   * KDC/AFS (and nothing else except perhaps LDAP)
   * Core Network Services
     * Domtool
     * Portal
     * Bugzilla
     * HCoop MoinMoin
     * DNS
     * SFTP (if we want to continue supporting it)
   * Mail delivery
     * Still into AFS space? At the very least users should be permitted to directly access their Maildir somehow
     * '''Note''': if we continue to use procmail, users can run programs on this machine; procmail should run in a restricted shell with access to a few external programs useful for mail filtering but nothing else (see the sketch after this list)
   * Databases
     * Dedicated partition on the smaller array for database storage (potentially with its own RAID1 in the far-off future?)
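
A minimal sketch of the restricted shell mentioned under mail delivery above: procmail's SHELL would point at a wrapper like this, which only executes a short whitelist of filtering helpers. The whitelist, paths, and the lack of pipe/quoting support are all illustrative assumptions, not a worked-out policy.

{{{#!python
#!/usr/bin/env python
"""Sketch of a restricted shell for procmail recipes: only a whitelist of
mail-filtering helpers may be executed.  Whitelist and paths are examples."""
import os
import sys

ALLOWED = {
    "spamassassin": "/usr/bin/spamassassin",
    "spamc":        "/usr/bin/spamc",
    "formail":      "/usr/bin/formail",
}

def main():
    # procmail runs external commands as: $SHELL -c 'command arg ...'
    if len(sys.argv) != 3 or sys.argv[1] != "-c":
        sys.exit("restricted shell: only '-c command' invocations are allowed")
    # Naive word split: no pipes, redirection, or shell quoting in this sketch.
    words = sys.argv[2].split()
    prog = os.path.basename(words[0])
    if prog not in ALLOWED:
        sys.exit("restricted shell: %s is not on the filter whitelist" % prog)
    os.execv(ALLOWED[prog], [ALLOWED[prog]] + words[1:])

if __name__ == "__main__":
    main()
}}}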
Line 51: Line 23:

=== Hardware ===
Line 62: Line 32:

=== Software ===

 * Also a Xen/OpenVZ server
 * VM Images
   * Secondary KDC
     * Do we need to have a secondary AFS server with ro copies of user volumes? Or at least some core volumes all machines need?
   * Web serving
     * Should we continue to use Apache? I know it would involve rewriting the domtool Apache modules, but it doesn't seem like we use Apache for more than static file serving, URL rewriting, and proxying, all of which could be done with a smaller server that will probably be easier to maintain (e.g. see our current mysterious issues that have defied all debugging)
     * Should users have direct access to this image? Perhaps we could either write a small config utility or extend domtool to enable running programs automatically in the image; users could then configure their daemons on the general user image. I can see a few issues with controlling the remote daemons, but maybe we can work this out, perhaps using runit (see the sketch after this list)
   * General user access
     * Users ssh here and run whatever
     * Either just a general use shell server or combined with the web serving image
   * IMAP/Jabber
     * If we choose not to deliver mail into AFS space, at least IMAP will need to go onto the core machine; Jabber is lightweight and does not present a security risk, so it can just go wherever IMAP does
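
To make the "users configure their own daemons" idea a bit more concrete, here is a sketch of a small tool (imagined as a domtool extension) that turns a per-user daemon declaration into a runit service directory. The service root, declaration format, and paths are assumptions for illustration only.

{{{#!python
#!/usr/bin/env python
"""Sketch: generate a runit service directory for a user-declared daemon.
Paths and the declaration format are illustrative assumptions."""
import os

SERVICE_ROOT = "/etc/sv"   # hypothetical runit service directory root

def write_service(user, name, command):
    """Create SERVICE_ROOT/<user>-<name>/run, execing the daemon as the
    user via chpst (part of runit)."""
    svdir = os.path.join(SERVICE_ROOT, "%s-%s" % (user, name))
    os.makedirs(svdir)
    run_path = os.path.join(svdir, "run")
    with open(run_path, "w") as f:
        f.write("#!/bin/sh\nexec chpst -u %s %s\n" % (user, command))
    os.chmod(run_path, 0o755)

if __name__ == "__main__":
    # Hypothetical declaration: user 'alice' wants her IRC bot supervised.
    write_service("alice", "ircbot", "/home/alice/bin/ircbot")
}}}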

1. General Specifications

1.1. All Machines

  • RAID1
  • Dual Socket
    • Initially install one four-core processor in each (six-core processors are dramatically more expensive)
  • Remote reboot and console ability
    • Most of the servers I have quickly specced out appear to have minimal remote reboot and console ability built in, with fancier add-on cards for web interfaces and other things; we should be OK with just the baseline module (see the sketch below).
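
For example, remotely power-cycling a box through its baseline BMC should be a single IPMI-over-LAN call; the hostname and credentials below are placeholders.

{{{#!python
#!/usr/bin/env python
"""Sketch: power-cycle a server via its BMC using IPMI over the network.
Hostname and credentials are placeholders."""
import subprocess

def power_cycle(bmc_host, user, password):
    # -I lanplus selects IPMI 2.0 over the LAN; 'chassis power cycle'
    # asks the BMC to turn the box off and back on.
    subprocess.check_call([
        "ipmitool", "-I", "lanplus",
        "-H", bmc_host, "-U", user, "-P", password,
        "chassis", "power", "cycle",
    ])

if __name__ == "__main__":
    power_cycle("core-bmc.example.org", "admin", "secret")
}}}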

1.2. Core Services Machine

  • 2U
  • 8GB RAM (initially)
    • We should probably use 2G or 4G modules to ensure we can upgrade to 16/32G without having to replace memory
  • Dual RAID1 Arrays (ideally room for 6 drives; 2 hot spares?) -- see the sketch below
    • Large (750G - 1TB?) array for AFS and only AFS
    • Smaller (250G?) array for OS images / databases
  • Redundant power supplies
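
If we end up doing the mirroring with Linux software RAID rather than the controller (still an open question), the two arrays above might be created roughly as below; the device names are examples only.

{{{#!python
#!/usr/bin/env python
"""Sketch: build the two RAID1 arrays with mdadm.  Device names are examples."""
import subprocess

def create_raid1(md_dev, members):
    # mdadm --create assembles a mirror from the listed member partitions.
    subprocess.check_call(
        ["mdadm", "--create", md_dev, "--level=1",
         "--raid-devices=%d" % len(members)] + members)

if __name__ == "__main__":
    # Large array: AFS vice partition only (the 1TB pair on the shopping list).
    create_raid1("/dev/md0", ["/dev/sdc1", "/dev/sdd1"])
    # Smaller array: OS / VM images / databases.
    create_raid1("/dev/md1", ["/dev/sda1", "/dev/sdb1"])
}}}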

1.3. User Services Machine

  • 1U
  • 8G RAM
    • Mostly allocated to user daemon image
  • Single RAID1 Array
    • Not too large (250G? Any smaller does not seem cost effective)
    • Local VM image disk space
    • Some amount of space for users (80G?)
      • Users who need fast local disk could request a portion of this (enforced using quotas etc.)
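
A sketch of how the "request a portion of the local disk" idea could be enforced with ordinary Linux disk quotas; the user, limits, and mount point are placeholders.

{{{#!python
#!/usr/bin/env python
"""Sketch: grant a user a slice of local disk via setquota.
User, limits, and mount point are placeholders."""
import subprocess

def grant_local_disk(user, soft_gb, hard_gb, filesystem="/local"):
    # setquota takes block limits in 1K blocks:
    #   setquota -u USER block-soft block-hard inode-soft inode-hard FS
    soft = str(soft_gb * 1024 * 1024)
    hard = str(hard_gb * 1024 * 1024)
    subprocess.check_call(
        ["setquota", "-u", user, soft, hard, "0", "0", filesystem])

if __name__ == "__main__":
    grant_local_disk("alice", 5, 6)   # hypothetical 5G soft / 6G hard grant
}}}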

1.4. Serial Console Server / IPKVM

We need some type of worst-case access to the physical consoles of the servers. IPKVM/KVM units are fairly expensive, and we potentially don't need everything they provide since we are not running X or anything else graphical remotely. Given that we have a nice IPKVM and KVM setup now, we may want to ship that to the new data center, but then we would be running for a period of time with no remote equivalent of physical access to our current setup, which is known to occasionally go down and become inaccessible.

Alternatively we could procure a serial console server for a bit less money and have access to the serial consoles of every machine, which ought to be just as good as having physical keyboard/monitor access via VNC. Additionally we would gain access to the IPMI capabilities of the connected machines (which may lower the cost of each machine by $200-$300, since we could avoid buying service processors for them); a quick sketch of what that buys us follows. If we got a fancy switch it might also have a serial console for configuration.
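
As an example of what those IPMI capabilities buy us, attaching to a machine's serial console over the network is a single Serial-over-LAN call from the console server (or any management host); the hostname and credentials below are placeholders.

{{{#!python
#!/usr/bin/env python
"""Sketch: open an interactive Serial-over-LAN console to a server's BMC.
Hostname and credentials are placeholders."""
import subprocess

def serial_console(bmc_host, user, password):
    # 'sol activate' attaches this terminal to the server's serial console
    # redirected over IPMI 2.0 (lanplus).
    subprocess.call([
        "ipmitool", "-I", "lanplus",
        "-H", bmc_host, "-U", user, "-P", password,
        "sol", "activate",
    ])

if __name__ == "__main__":
    serial_console("user-bmc.example.org", "admin", "secret")
}}}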

1.4.1. General Specs

  • 1U/2U
  • Access to 8 machines ideally, four minimally

1.4.2. Console Server

1.4.2.1. Avocent Cyclades CS 8-Port Console Server

It appears this does not support IPMI commands; unless the BMCs of the servers we get have some type of text console interface over serial, this is suboptimal.

1.4.2.2. OpenGear CM400x

These are not rack-mount units, but they seem to be more in line with what we need from a console server. It appears (need to check the docs more thoroughly) that they support connecting to IPMI devices via the network (which it seems we can secure by restricting IPMI access to the IP of the console server), in addition to supporting direct serial consoles.

These devices also run entirely Free Software, and there is a dev kit that looks reasonably easy to use, so we can customize them. The manual is ambiguous as to whether or not the CM400x hardware is capable of existing on multiple protocol-based tagged VLANs, but ClintonEbadi has emailed Opengear requesting further information on this.

1.4.2.2.1. OpenGear CM4008

1.4.2.2.2. OpenGear CM4001

If we use Serial-over-LAN (assuming it can be secured without a dedicated management LAN) for everything, the CM4001 should be fine for our use.

1.4.2.3. OpenGear IM4004-5

If the CM4001 cannot coexist on multiple protocol-based VLANs, this looks like our best bet for a console server -- we can connect eth0 of each server to the console server's management LAN and eth1 to the primary LAN.

1.5. Network Switch

  • 1U
  • Gigabit
  • 8 or 16 ports

If we can get by with a CM4001, we should spend a bit more on a proper smart switch so that we can set up multiple VLANs: initially at least a public VLAN and a private, IPMI-only, protocol-based VLAN. Later on we may want to experiment with routing database and AFS traffic locally on a VLAN with jumbo frames enabled (according to a cursory Google search this would increase database throughput, but would likely have little effect on AFS until OpenAFS 1.6 is released with the new RxTCP transport layer); a rough host-side sketch follows.
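
On the host side, the jumbo-frame experiment might look roughly like the sketch below (the IPMI-only VLAN would mostly be switch configuration); the interface name, VLAN ID, and MTU are assumptions.

{{{#!python
#!/usr/bin/env python
"""Sketch: add a tagged VLAN for local AFS/database traffic with jumbo frames.
Interface name, VLAN ID, and MTU are examples."""
import subprocess

def run(*cmd):
    subprocess.check_call(list(cmd))

if __name__ == "__main__":
    # The parent interface must carry jumbo frames before the VLAN can.
    run("ip", "link", "set", "dev", "eth0", "mtu", "9000")
    # Tagged VLAN 30 on top of eth0 for AFS/database traffic.
    run("ip", "link", "add", "link", "eth0", "name", "eth0.30",
        "type", "vlan", "id", "30")
    run("ip", "link", "set", "dev", "eth0.30", "mtu", "9000", "up")
}}}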

2. Shopping list

  • Dell PowerEdge R410
    • Intel® Xeon® E5520, 2.26GHz, 8M Cache, Turbo, HT, 1066MHz Max Mem
    • 1GB RAM
    • Only one of the cheapest 160GB hard drives
    • Baseboard Management Controller
    • No CD drive (we'll boot from USB)
    • 1-year warranty
    • Price: $1050

  • PowerEdge 2970
    • Quad Core AMD Opteron™ 2372 HE, 2.1GHz, 4x512K Cache, 1GHz HyperTransport
    • SAS 6/iR Integrated, x6 Backplane, 1x6 Backplane for 3.5-inch Hard Drives
    • One default 160GB HD
    • No CD/DVD
    • 3Yr Basic Hardware Warranty Repair: 5x10 HW-Only, 5x10 NBD Onsite (cheapest)
    • Price: $709

  • RAM
    • 4GB DDR3 x 2 for user services (2 x $121.49)
    • 4GB DDR2 x 2 for core services (2 x $101.49)

  • Drives
    • 1TB x 2 for core services (2 x $74.99)
    • 500GB x 4 (4 x $54.99)

  • Serial console server
    • $400
  • Total price: $2,974.90 plus shipping, tax, and various other small things we might need to pay (see the check below)
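
Quick sanity check of the total, using only the line items listed above:

{{{#!python
# Recompute the shopping-list total from the prices above.
items = [
    ("Dell PowerEdge R410",            1050.00),
    ("PowerEdge 2970",                  709.00),
    ("4GB DDR3 x 2 (user services)", 2 * 121.49),
    ("4GB DDR2 x 2 (core services)", 2 * 101.49),
    ("1TB x 2 (core services)",      2 * 74.99),
    ("500GB x 4",                    4 * 54.99),
    ("Serial console server",           400.00),
]
print("Total: $%.2f" % sum(price for _, price in items))   # Total: $2974.90
}}}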
