
Diff for "Hardware"

Differences between revisions 47 and 53 (spanning 6 versions)
Revision 47 as of 2006-07-03 11:46:02
Size: 15800
Comment: Added note about redundant power supplies.
Revision 53 as of 2006-07-06 20:55:24
Size: 4106
Comment: updated link to hardware config
Line 1: Line 1:
= New System Hardware =
During the HCoop IRC meeting on June 24, 2006, the group decided that it would base its new system architecture on the following pieces of hardware:

 * Two robust servers, one that doesn't allow normal user logins, and one that does.
 * One switch to form a LAN between these servers.
 * One serial port device, to facilitate remote access to our servers.
Also, it was mentioned that we should research hardware support contracts from any vendor that will be selling us equipment.

Additionally, the group decided that the server HCoop currently owns, Abulafia, will be brought to he.net for shell service. This will follow a necessary reload of the OS at a time to be determined later.
This page collects information on the hardware we plan to install at a colocation provider as part of our new hosting infrastructure. Some older discussion and similar stuff is on NewSystemHardwareArchive.
Line 12: Line 4:
Line 16: Line 7:
Line 19: Line 9:
 * Disks: 4 x 10K Seagate Cheetah SCSI drives, 73GB '''and''' 2 x 10K Seagate Cheetah SCSI drives, 36GB
 * Disks:
  * 4 x 10K Seagate Cheetah SCSI drives, 73GB '''and'''
  * 2 x 10K Seagate Cheetah SCSI drives, 36GB
Line 21: Line 13:
Line 25: Line 16:
Line 30: Line 20:
Line 34: Line 23:

 * Processor: 2 x 3.2 GHz
 * RAM: 4 GB
 * Disks: 2 x 73 GB SCSI, RAID 1
 * Extra: Should be 1U. The goal is to make it processor-intensive, and only disk-heavy enough to ensure a high level of uptime. Other considerations, such as the preference for AMD and for the vendor Penguin Computing expressed on the list and in meetings, should be followed here.

Of the previous configurations, the model that I like best for this machine is the Penguin Altus 1400. It is a 1U machine with hardware SCSI RAID, listing for $3463 [http://www.hcoop.net/~leitgebj/hcoop_servers/penguin_web_server.ps (postscript)] [http://www.hcoop.net/~leitgebj/hcoop_servers/penguin_web_server.pdf (PDF)]

We could go with SATA, but I would only do this if we can get hot-swappable disks in a 1U case. I suppose that SATA could be a better choice once we have a cluster of web servers, even if it isn't hot-swappable... maybe we can get SATA now and live with the consequences until we reach that point. Ideas?

I think that we should also negotiate with Penguin on the phone over this configuration.
 * Processor: Up for debate
 * RAM: 4 GB (2 x 2 GB)
 * Disks: 2 x 120 GB SATA
 * 3 year, on-site hardware support (only adds $119)
 * Penguin Model: Altus 1300
 * List price: $3163.00
 * Extra: Should be 1U. The goal is to make it processor-intensive, and only disk-heavy enough to ensure a high level of uptime. Other considerations, such as the preference for AMD and for the vendor Penguin Computing expressed on the list and in meetings, should be followed here. More information from the online configuration: [http://www.hcoop.net/~leitgebj/hcoop_servers/altus_1300_20060606_3.pdf (PDF)] [http://www.hcoop.net/~leitgebj/hcoop_servers/altus_1300_20060606_3.ps (PS)]
Line 48: Line 33:
----
== Switch ==
We are proceeding under the assumption that we'll use ShaunEmpie's donation (see HardwareDonations), a Nortel (Baystack) 380 switch. He says:
Line 50: Line 36:
This page will serve as a forum for collaborative research on the pieces of hardware that we need.
It is not brand new but is working. Here is [http://vpit.net/es380-guide.pdf a guide] that I was able to find to give anyone interested a more in depth view of it.
Line 52: Line 38:
'''Some members are willing to donate pieces of hardware that fit our needs. See the page HardwareDonations for more information. Also please check the power guidelines for he.net as this is something that we will have to work around, either by going to another colocation site or hosting less equipment to start out at he.net.'''
VLAN Configuration Proposal:
Line 54: Line 40:
== He.net Power/Rack Guidelines ==
I asked he.net about the type of racks they used, and asked for additional information. They told us to plan on the following:
{{{
With our new setup, I think it would be best to setup a few different
VLANs for different uses. For anyone who is unfamiliar with the term, a
VLAN is a virtual lan. It allows you to have completely separate networks
on the same switch. This will allow us to setup a private network that
the public and peer1 would have no access to. This could be handy for
database systems, NAS, backup servers, etc which you'd want to keep off
the public network.
Line 57: Line 49:
''We have locking cabinets. Our 7U space is 12 inches tall, 19 inches wide, and 32 inches deep. The cabinets come with front mounting posts, so don't purchase mounting rails. If your servers are extra heavy and need additional support, we offer rackmount shelves for a one time $60 setup fee.''
Proposed Configuration:
Line 59: Line 51:
We will almost definitely need this shelf for our equipment, so add on the $60 fee to our expected costs.
VLAN 1. Management VLAN - not used for normal traffic
Line 61: Line 53:
''You get one outlet per 7U, so you'll need to provide a power strip for additional machines.''
VLAN 10. Public VLAN - public/Peer1's network
Line 63: Line 55:
''You can use approx 2 amps of power per 7U.''
VLAN 20. Private VLAN - private subnet for inter-server traffic
Line 65: Line 57:
I asked for more information on the type of power strip:
For a starting point i think having ports 1-12 in VLAN 10 and ports 13-24
in VLAN 20 would be best. The VLAN membership of a port can be changed
easily so these would not be set in stone.
Line 67: Line 61:
''You'll just want a basic power strip. A surge protector is not needed. We have failover UPS's that filter the power and protect against power surges. We also have PDU's for each row of cabinets for even more filtering.''
The switch allows for many more VLANs than we'll ever need so if anyone
has a suggestion or need for another VLAN it would be trivial to setup.
Any questions/comments, let me know.
Line 69: Line 65:
According to he.net, the surge protector does not have to be rack-mountable. Anyone have $7 we can borrow? ;-)
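
As a rough sanity check on that 2-amp figure, here is a small back-of-the-envelope calculation. It is only a sketch: it assumes 120 V circuits (he.net did not say), and the server wattages are placeholder guesses, not measured draws.

{{{
# Rough power-budget check for one 7U allotment at he.net.
# Assumption: 120 V circuits; the server figures below are guesses.
volts = 120.0
amps_allowed = 2.0
budget_watts = volts * amps_allowed            # ~240 W per 7U

estimated_draw_watts = {
    "admin server (guess)": 350,
    "web server (guess)": 250,
    "Nortel 380 switch (spec sheet)": 150,
}

total = sum(estimated_draw_watts.values())
print("Budget: %.0f W, estimated draw: %d W" % (budget_watts, total))
print("Over the per-7U budget" if total > budget_watts else "Within budget")
}}}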

== Servers ==
We will be purchasing two servers, which will be configured and sent to he.net for colocation.

=== Desired Features ===
These servers should be as redundant as possible. At this point we cannot afford to eliminate every single point of failure, but we should look for the following features in our new servers:

 * Redundant power supplies.
  * How important is this really? I would argue not, because I doubt we will find a colo for our size that will give us power outlets on two independent circuits, and the chance of a power supply failing is low (thus killing both benefits and creating nothing but unnecessary cost). -- ClintonEbadi

  * It's not an issue for the new web server anyway; we need to go 1U, and if we use Penguin they don't supply these. Some people find it useful because it's not just a matter of the circuit failing: when you use a Y cable (to plug both power supplies into the same outlet), it prevents your server from going down if a power supply fails. It is low probability, but getting a new power supply to the machine can incur a lot of downtime that can be avoided by this option, which only adds about $200 - $300 to the price of a new server. I believe it also doesn't use much more power to run two power supplies than one once the server is started. -- JustinLeitgeb

 * Hardware RAID.
 * Dual CPUs; AMD seems to be a stronger option than Intel
==== Differences Between the Servers ====
The admin-only server will hopefully be serving an AFS file system, which means that fancier kinds of RAID are justified there. The all-members server can get away with cheaper (and maybe even faster) solutions for local disk access.

JustinLeitgeb thinks that perhaps RAID 1 would work on the all-members server, and either RAID 5 or RAID 10 on the admin server. It should be RAID 10 if we can afford it, or RAID 5 if we're shorter on cash. :)

There may be other factors influencing different configuration choices between the servers.

Perhaps we can get away with SATA RAID 1 on the web server -- hopefully this machine won't be IO-bound, especially if we add enough RAM later. Also, it might benefit us to get a couple of rather lightweight web servers behind a load balancer before really maxing them out, in order to have fewer single points of failure (of course, at this point we would probably also want to have two load-balancers using "heartbeats" so that they couldn't cause a prolonged system failure).
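
To make the capacity side of that trade-off concrete, here is a small illustrative calculation using the drive sizes mentioned above (the helper function exists only for this page, not anything we actually run):

{{{
# Usable space for the RAID levels under discussion.
def usable_gb(level, disks, size_gb):
    if level == "RAID 1":        # simple two-disk mirror
        return size_gb
    if level == "RAID 5":        # one disk's worth of parity
        return (disks - 1) * size_gb
    if level == "RAID 10":       # striped mirrors: half the raw space
        return disks * size_gb // 2
    raise ValueError(level)

# Admin server, using the 4 x 73 GB Cheetah drives:
print("RAID 5 : %d GB usable" % usable_gb("RAID 5", 4, 73))    # 219 GB
print("RAID 10: %d GB usable" % usable_gb("RAID 10", 4, 73))   # 146 GB
# Web server, 2 x 73 GB SCSI (or 2 x 120 GB SATA):
print("RAID 1 : %d GB usable" % usable_gb("RAID 1", 2, 73))    # 73 GB
}}}

RAID 5 buys more usable space from the same disks; RAID 10 generally gives better write performance, which is why it is preferred above if we can afford it.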

=== Proposed Vendors and Models ===
[http://www.dell.com Dell] Models:

 * Possible web server [http://www.hcoop.net/~leitgebj/hcoop_servers/dell_web_server.ps (postscript)] [http://www.hcoop.net/~leitgebj/hcoop_servers/dell_web_server.pdf (PDF)], based on the Dell PowerEdge 1850 $5071.
 * Possible admin server [http://www.hcoop.net/~leitgebj/hcoop_servers/dell_admin_server.ps (postscript)] [http://www.hcoop.net/~leitgebj/hcoop_servers/dell_admin_server.pdf (PDF)], based on the Dell PowerEdge 2850 (offers more space for hard disks in our primary file server) $8486.
Note that when I checked, Dell dropped something like $1200 off the price of each server over $4000, so we should expect some significant discounts. Whichever company we go with, we may be able to negotiate lower prices by emphasizing that we may buy more in the future, etc.

[http://www.monarchcomputer.com/Merchant2/merchant.mv?Screen=CTGY&Store_Code=M&Category_Code=allracks Monarch Computer] Models:

[http://www.penguincomputing.com Penguin Computing] Models:

 * Possible web server configuration, hardware RAID, $3463 [http://www.hcoop.net/~leitgebj/hcoop_servers/penguin_web_server.ps (postscript)] [http://www.hcoop.net/~leitgebj/hcoop_servers/penguin_web_server.pdf (PDF)]
 * Possible admin server configuration, RAID 10, 1U, $5321 [http://www.hcoop.net/~leitgebj/hcoop_servers/penguin_admin_server.ps (postscript)] [http://www.hcoop.net/~leitgebj/hcoop_servers/penguin_admin_server.pdf (PDF)]
 * Possible admin server configuration, using the 2U server, redundant power supplies, and RAID 5 $4884 [http://www.hcoop.net/~leitgebj/hcoop_servers/penguin_admin_raid5_server.ps (postscript)] [http://www.hcoop.net/~leitgebj/hcoop_servers/penguin_admin_raid5_server.pdf (PDF)]
 * Possible admin server configuration using the 2U server, redundant power supplies, and RAID 10 $5523 [http://www.hcoop.net/~leitgebj/hcoop_servers/penguin_admin_raid10_server_2200.ps (postscript)] [http://www.hcoop.net/~leitgebj/hcoop_servers/penguin_admin_raid10_server_2200.pdf (PDF)]
 * Possible web server configuration with SATA RAID 1, budget configuration about $2700 [http://www.hcoop.net/~leitgebj/hcoop_servers/altus_budget_web.ps (postscript)] [http://www.hcoop.net/~leitgebj/hcoop_servers/altus_budget_web.pdf (PDF)]
With the Penguin models, we seem to have to go to the 2U Altus 2200 in order to get a redundant power supply.

== Ethernet Switch ==
=== Desired Features ===
 * Gigabit
 * 5 ports minimum
 * Managed - so that we can troubleshoot failed NICs more easily
 * Rack-mountable, so that vibration and heat issues are diminished.
 * SNMP monitoring capability
=== Additional Information ===
He.net sent us the following when asked about switch configurations at their site:

''We've got customers using everything from ElCheapoSwitch(tm) to Cisco-grade equipment. The main difference between the two is how much traffic they can deal with, the number of packets they can deal with, and how they can be accessed/monitored. If you're looking at pushing primarily web traffic (<50Mb/s) and do not require any of the more advanced functionality of a managed switch, you could likely just go with a good unmanaged switch. If you were doing higher traffic levels, streaming, or other such traffic which consists of a zillion little packets, especially if it's between your servers, you would be better served by something a bit higher grade.''

And from another support rep at he.net (their responsiveness has been impressive so far!):

''Depends on their needs. If they want to run MRTG, then they need a managed switch. If they just need a switch, a netgear or linksys or d-link will accomplish the job. ''

''Cost differences are greater managed versus non-managed. Non-managed can be 50-$100, whereas managed can start at about $250 and go into the $thousands depending on model and capabilities.''

I also asked he.net about the number of ports we would need, and got some useful information about setting up a VLAN. (A quick worked example of their port-count rule follows the quotes below.) He.net's response:

''(2 * n) + 1... where n is the number of machines you colocate. ''

''For 3 servers, your switch would need 7 ports. 3 for the private network, 3 for the public network, and 1 for the uplink to HE. ''

''For better control of your packets and their direction... if you're intending to do a public and private network, you might want to consider purchasing 2 smaller ElCheapoSwitches... or using a slightly more managed switch which support creating a VLAN.''
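
To make he.net's sizing rule concrete, here is a quick worked example (the machine counts are just illustrative):

{{{
# he.net's rule of thumb: one public port and one private port per
# colocated machine, plus one uplink to their network.
def switch_ports_needed(machines):
    return 2 * machines + 1

for n in (3, 4, 6):
    print("%d machines -> %d switch ports" % (n, switch_ports_needed(n)))
# 3 machines -> 7 ports, so an 8-port switch is the bare minimum
# and leaves almost no room to grow.
}}}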

=== Proposed Models and Vendors ===
==== Vendors ====
[http://newegg.com/ Newegg] has been recommended to several of us.

==== Models ====
===== NETGEAR GS108 10/100/1000Mbps =====
[http://www.newegg.com/Product/Product.asp?Item=N82E16833122111 Netgear GS108 Switch ]: Highly-rated Netgear switch that is not rack-mountable

Price: $56.99

(Previous opinion retracted --MichaelOlson)

I don't like this switch for the following reasons:

 1. It is not rack-mountable, meaning that it could raise cooling issues in the rack and be more susceptible to shocks that could reduce the reliability of the switch or jar patch cables out of the ports.

 1. It is not managed, so we can't track important information about performance and possible NIC failures via SNMP.
Basically, I think that if we're going to pay all of this money for equipment and hosting, we shouldn't put an interconnect with insufficient features in the middle of our architecture. But, I'm not a networking expert, so I would welcome any opinions contrary to this! JustinLeitgeb

===== Level One GSW-1655 10/100/1000Mbps =====
 * ($249.99) Level One 16-port rack-mountable switch [http://www.newegg.com/Product/Product.asp?Item=N82E16833118021 link]
I've never heard of this brand (Level 1?) so I don't trust it. Any reviews? JustinLeitgeb

===== Dell PowerConnect 2716 =====
This is an 8-port gig switch that is web-manageable and lists for $82. I think that we could make it work, especially if I were able to write some scripts to get important data for monitoring in Nagios or rrdtool. I put the [http://www.hcoop.net/~leitgebj/hcoop_servers/pwcnt_27xx_specs.pdf specification sheet] on our web site. This switch is rack-mountable.
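
As a sketch of what such a monitoring script might look like (the hostname and community string are placeholders, and it shells out to the standard net-snmp snmpget tool rather than assuming any particular Python SNMP library):

{{{
#!/usr/bin/env python
# Sketch: poll per-port status over SNMP so Nagios/rrdtool can graph it.
import subprocess

SWITCH = "switch.hcoop.net"    # placeholder hostname
COMMUNITY = "public"           # placeholder read-only community string

def snmp_get(oid):
    out = subprocess.check_output(
        ["snmpget", "-v2c", "-c", COMMUNITY, "-Oqv", SWITCH, oid])
    return out.decode().strip()

# IF-MIB::ifOperStatus.N reports whether port N is up or down.
for port in range(1, 9):
    print("port %d: %s" % (port, snmp_get("IF-MIB::ifOperStatus.%d" % port)))
}}}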

===== 3Com® SuperStack® 3 Switch 3812 =====
[http://www.3com.com/products/en_US/detail.jsp?tab=features&pathtype=purchase&sku=3C17401 3Com® SuperStack® 3 Switch 3812] seems to have most of the features that we need, with a bit of room to grow. Prices range from $1000 to $1500 on [http://froogle.google.com Froogle]; in my experience, [http://www.cdw.com CDW] is a reliable vendor. Perhaps we should make the jump and get the 24-port model, which would support our use of an entire rack in the future, if the price difference is small?

===== Nortel Ethernet Routing Switch 3510-24T and Ethernet Switch 380-24T =====
The 3510 will be much more robust than the others listed. List price is around $2200, but I will see how much of a discount I can get, as wholesale pricing is much, much lower. Here is the product brief: [http://vpit.net/ers3510-brief.pdf ers3510 brief]. I can answer any questions about it, as I do work for Nortel. I am not trying to make a sale here, as I am not a salesman; I just work with Nortel switches every day as an engineer.

The 380 I can donate, as I own it. It is not brand new but is working. Here is a guide that I was able to find to give anyone interested a more in-depth view of it: [http://vpit.net/es380-guide.pdf es380 guide].
-Shaun}}}
Line 172: Line 68:
Input current: 1.5 A at 100 VAC
Input voltage (rms): 100 to 240 VAC at 47 to 63 Hz
Power consumption: 150 W
Thermal rating: 1000 BTU/hr maximum
Line 177: Line 69:
Both are managed, 10/100/1000 switches.
 * Input current: 1.5 A at 100 VAC
 * Input voltage (rms): 100 to 240 VAC at 47 to 63 Hz
 * Power consumption: 150 W
 * Thermal rating: 1000 BTU/hr maximum
Line 179: Line 74:
== Serial Port ==
=== Desired Features ===
Is this device really necessary? For an extra $1000 - $2000, and utilization of 1U, I am not convinced that this is worth the expense. It seems that in the rare event that our machine is inaccessible via ssh, we can use he.net's remote hands and put our resources elsewhere. If someone does think that this is necessary, please link to specific models that would be helpful, along with reasons why they would justify the additional cost and rack space. JustinLeitgeb
== Serial console ==
Line 183: Line 76:
=== Proposed Models and Vendors ===
[http://www.cyclades.com/ Cyclades] was mentioned as one vendor of serial port devices which are Linux-friendly.
Some device to simulate local login over the Internet could be a life saver. JustinLeitgeb mentions a special card that Dell sells that would work with his donation.

This page collects information on the hardware we plan to install at a colocation provider as part of our new hosting infrastructure. Some older discussion and similar stuff is on NewSystemHardwareArchive.

System setup

Currently, we know the intended uses for the three machines we will base our infrastructure on, as well as the configurations of our Abulafia machine and of Justin Leitgeb's to-be-donated server. What we still need to come up with is the ideal setup for the third machine, which we will have to buy. The machine configurations and intended uses follow:

Justin Leitgeb's donation: Dell PowerEdge 2850

  • Processor: 2 x 2.8 GHz
  • RAM: 4 GB
  • Disks:
    • 4 x 10K Seagate Cheetah SCSI drives, 73GB and

    • 2 x 10K Seagate Cheetah SCSI drives, 36GB
  • Extra: RAID kit, with battery, etc., 256 MB RAID cache, 2 power supplies

Intended use: fileserver and host for all services that don't involve dynamic content provided by non-admins. No user logins.

HCoop's currently-underused machine Abulafia

  • Processor: 1 x 900 MHz
  • RAM: 512 MB
  • Disks: 40 GB RAID 1 (2 x 40 GB 7200 RPM ATA drives)
  • Extra: 3Ware 6400 PCI ATA RAID controller

Intended use: refurbished slightly to serve as a generic shell server and the only machine where usage not strictly related to "Internet hosting" is permitted.

New server, for which we need to come up with hardware specifications

Intended use: dynamic web content and any other Internet services that involve running arbitrary code from members (including custom daemons, etc.)

Switch

We are proceeding under the assumption that we'll use ShaunEmpie's donation (see HardwareDonations), a Nortel (Baystack) 380 switch. He says:

It is not brand new but is working. Here is [http://vpit.net/es380-guide.pdf a guide] that I was able to find to give anyone interested a more in depth view of it.

VLAN Configuration Proposal:

With our new setup, I think it would be best to setup a few different
VLANs for different uses.  For anyone who is unfamiliar with the term, a
VLAN is a virtual lan.  It allows you to have completely separate networks
on the same switch.  This will allow us to setup a private network that
the public and peer1 would have no access to.  This could be handy for
database systems, NAS, backup servers, etc which you'd want to keep off
the public network.

Proposed Configuration:

VLAN 1.     Management VLAN - not used for normal traffic

VLAN 10.    Public VLAN - public/Peer1's network

VLAN 20.    Private VLAN - private subnet for inter-server traffic

For a starting point i think having ports 1-12 in VLAN 10 and ports 13-24
in VLAN 20 would be best.  The VLAN membership of a port can be changed
easily so these would not be set in stone.

The switch allows for many more VLANs than we'll ever need so if anyone
has a suggestion or need for another VLAN it would be trivial to setup.
Any questions/comments, let me know.

-Shaun
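
As a small illustration of the proposed layout (the port ranges and VLAN IDs are taken straight from Shaun's message above; the helper itself is just for clarity):

{{{
# Proposed port-to-VLAN layout on the donated Nortel 380-24T:
#   ports 1-12  -> VLAN 10 (public / Peer1 network)
#   ports 13-24 -> VLAN 20 (private inter-server network)
# VLAN 1 is reserved for switch management and carries no normal traffic.
def vlan_for_port(port):
    if not 1 <= port <= 24:
        raise ValueError("the 380-24T has 24 ports")
    return 10 if port <= 12 else 20

assert vlan_for_port(1) == 10    # a server's public interface
assert vlan_for_port(13) == 20   # the same server's private interface
}}}

Since membership can be changed per port, this split is only a starting point, as Shaun notes.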

ES380 AC Power Specs:

  • Input current: 1.5 A at 100 VAC
  • Input voltage (rms): 100 to 240 VAC at 47 to 63 Hz
  • Power consumption: 150 W
  • Thermal rating: 1000 BTU/hr maximum
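
For reference, the wattage and thermal figures are consistent (1 W is about 3.412 BTU/hr); a quick conversion, assuming 120 V circuits:

{{{
# Cross-check the ES380 numbers: watts -> BTU/hr, and amps at 120 V.
watts = 150
print("%.0f BTU/hr (max rating is 1000)" % (watts * 3.412))   # ~512
print("%.2f A on a 120 V circuit" % (watts / 120.0))          # ~1.25
}}}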

Serial console

Some device to simulate local login over the Internet could be a life saver. JustinLeitgeb mentions a special card that Dell sells that would work with his donation.
