
Diff for "Hardware"

Differences between revisions 43 and 205 (spanning 162 versions)
Revision 43 as of 2006-07-01 21:30:04
Size: 14096
Editor: ri02-227
Comment:
Revision 205 as of 2018-12-14 04:06:57
Size: 2940
Editor: ClintonEbadi
Comment: bog's been turned off now
Deletions are marked like this. Additions are marked like this.
Line 1: Line 1:
= New System Hardware =
During the HCoop IRC meeting on June 24, 2006, the group decided that it would base its new system architecture on the following pieces of hardware:
#pragma section-numbers off
Line 4: Line 3:
 * Two robust servers, one that doesn't allow normal user logins, and one that does.
 * One switch to form a LAN between these servers.
 * One serial port device, to facilitate remote access to our servers.
Also, it was mentioned that we should research hardware support contracts from any vendor that will be selling us equipment.
This page collects information on the hardware that we have installed or plan to install as part of HCoop infrastructure.
Line 9: Line 5:
Additionally, the group decided that the server HCoop currently owns, Abulafia, will be brought to he.net for shell service. This will follow a necessary reload of the OS software at a time to be determined later.
<<TableOfContents>>
Line 11: Line 7:
= System setup =
= Peer1 =
Line 13: Line 9:
Currently, we know the uses for the three machines we will base our infrastructure on. We also know the configuration of our Abulafia machine and of Justin Leitgeb's to-be-donated server. What we still need to determine is the ideal setup for the third machine, which we will have to buy. The machine configurations and intended uses follow:
Current pictures of the actual Peer1 cabinet are at [[OnSiteVisits/20130627]].
Line 15: Line 11:
== Justin Leitgeb's donation: Dell PowerEdge 2850 ==
== fritz ==
Line 17: Line 13:
 * Processor: 2 x 2.8 GHz
 * RAM: 4 GB
 * Disks: 4 x 10K Seagate Cheetah SCSI drives, 73 GB, '''and''' 2 x 10K Seagate Cheetah SCSI drives, 36 GB
 * Extra: RAID kit, with battery, etc., 256 MB RAID cache, 2 power supplies
FritzInfo
Line 22: Line 15:
'''Intended use: fileserver and host for all services that don't involve dynamic content provided by non-admins. No user logins.'''
 * Location: Peer1
 * Model: Dell PowerEdge 2970
 * Processor: 2x Quad Core AMD Opteron™ 2372HE 2.1GHz 4x512K Cache 1GHz HyperTransport
 * RAM: 24GB (4x2GB + 4x4GB), 800MHz, Dual Ranked
 * 1x6 Backplane for 3.5-inch Hard Drives
 * Integrated SAS/SATA No RAID
 * Disks: system disks 2x 160GB 7.2K RPM Serial ATA 3Gbps 3.5-in hot-plug drives, plus AFS disks 2x 1TB Western Digital in RAID 1
 * Redundant Power Supply with Dual Cords
 * Lan: Dual Embedded Broadcom® NetXtreme II 5708 Gigabit Ethernet NIC
 * Form factor: 2U
 * OS: Debian Squeeze
 * User logins: no
 * Source: purchase from Dell store, 2x 1 TB disks from Newegg
Line 24: Line 29:
== HCoop's currently-underused machine Abulafia ==
'''Use: AndrewFileSystem fileserver, MitKerberos KDC, KernelVirtualMachine host'''
Line 26: Line 31:
 * Processor: 1 x 900 MHz
 * RAM: 512 MB
 * Disks: 1 x 40 GB
 * Extra:
= Linode =
Line 31: Line 33:
'''Intended use: refurbished slightly to serve as a generic shell server and the only machine where usage not strictly related to "Internet hosting" is permitted.'''
== outpost ==
Line 33: Line 35:
== New server, for which we need to come up with hardware specifications ==
 * Location: Linode (hosted at UK data center)
 * Model: Xen VM
 * Processor: 8 cores (1x priority?)
 * RAM: 1024 MB
 * Disk: 40 GB
 * OS: Debian wheezy
 * User logins: no
Line 35: Line 43:
 * Processor:
 * RAM:
 * Disks:
 * Extra:
'''Use: secondary DNS on a different subnet, all tasks requiring remote location.'''
Line 40: Line 45:
'''Intended use: dynamic web content and any other Internet services that involve running arbitrary code from members (including custom daemons, etc.)'''
= Digital Ocean =

== gibran ==

 Location:: DigitalOcean NYC3
 Allocated Resources:: 6 vCPU, 16G RAM, 320G storage. Additional block storage volume for OpenAFS `/vicepa`
 Operating System:: Debian Stretch AMD64
 User Logins:: No
 Intended Use:: Primary AndrewFileSystem fileserver and dbserver, primary MitKerberos KDC, SQL databases, [[ConfigurationManagent|Puppet Master]]
 Details:: ServerGibran

== lovelace ==

 Location:: DigitalOcean NYC3
 Allocated Resources:: 2 vCPU, 2G RAM, 60G storage
 Operating System:: Debian Stretch AMD64
 User Logins:: No
 Intended Use:: Secondary AndrewFileSystem fileserver and dbserver, secondary MitKerberos KDC
 Details:: ServerLovelace


== marsh ==

 Location:: DigitalOcean NYC3
 Allocated Resources:: 4vCPU, 8G RAM, 160G storage
 Operating System:: Debian Stretch AMD64
 User Logins:: Yes
 Intended Use:: Member logins
 Details:: ServerMarsh

== minsky ==

 Location:: DigitalOcean NYC3
 Allocated Resources:: 2vCPU, 4G RAM, 80G storage
 Operating System:: Debian Stretch AMD64
 User Logins:: No
 Intended Use:: Mail server and ejabberd server
 Details:: ServerMinsky

== shelob ==

 Location:: DigitalOcean NYC3
 Allocated Resources:: 4vCPU, 8G RAM, 160G storage
 Operating System:: Debian Stretch AMD64
 User Logins:: No
 Intended Use:: Web Server
 Details:: ServerShelob

= Awaiting setup =

None.

= Awaiting purchase =

New virtual servers, see VirtualizedHosting2018 and ServerMigration2018.

= Decommissioned =

See [[/Decommissioned]] for older machines.
Line 43: Line 106:

This page will serve as a forum for collaborative research on the pieces of hardware that we need.

'''Some members are willing to donate pieces of hardware that fit our needs. See the page HardwareDonations for more information. Also please check the power guidelines for he.net as this is something that we will have to work around, either by going to another colocation site or hosting less equipment to start out at he.net.'''

== He.net Power/Rack Guidelines ==
I asked he.net about the type of racks they use, and for additional information. They told us to plan on the following:

''We have locking cabinets. Our 7U space is 12 inches tall, 19 inches wide, and 32 inches deep. The cabinets come with front mounting posts, so don't purchase mounting rails. If your servers are extra heavy and need additional support, we offer rackmount shelves for a one time $60 setup fee.''

We will almost definitely need this shelf for our equipment, so add on the $60 fee to our expected costs.

''You get one outlet per 7U, so you'll need to provide a power strip for additional machines. ''

''You can use approx 2 amps of power per 7U.''

I asked for more information on the type of power strip:

''You'll just want a basic power strip. A surge protector is not needed. We have failover UPS's that filter the power and protect against power surges. We also have PDU's for each row of cabinets for even more filtering.''

According to he.net, the power strip does not have to be rack-mountable. Anyone have $7 we can borrow? ;-)
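As a rough sanity check, the 2 A per 7U allowance can be turned into a wattage budget. The sketch below assumes 120 V service (he.net quoted only the amperage), and the per-device draw figures are hypothetical placeholders, not measured values:

```python
# Rough power-budget check for one 7U he.net allotment.
# Assumes 120 V service; he.net only quoted the 2 A limit.
VOLTS = 120
AMPS_PER_7U = 2

budget_watts = VOLTS * AMPS_PER_7U  # 240 W per 7U

# Hypothetical draw estimates for the planned equipment (illustrative only).
draws = {"admin server": 150, "web server": 120, "switch": 30}
total = sum(draws.values())

print(f"budget: {budget_watts} W, planned: {total} W, fits: {total <= budget_watts}")
# With these placeholder numbers the plan exceeds one 7U's budget,
# which is exactly the "host less equipment to start" concern above.
```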

== Servers ==
We will be purchasing two servers, which will be configured and sent to he.net for colocation.

=== Desired Features ===
These servers should be as redundant as possible. At this point we cannot afford to eliminate every single point of failure, so we should prioritize the following features in our new servers:

 * Redundant power supplies.
  * How important is this really? I would argue not, because I doubt we will find a colo for our size that will give us power outlets on two independent circuits, and the chance of a power supply failing is low (thus killing both benefits and creating nothing but unnecessary cost). -- ClintonEbadi
 * Hardware RAID.
 * Dual CPUs; AMD seems to be a stronger option than Intel.
==== Differences Between the Servers ====
The admin-only server will hopefully be serving an AFS file system, which means that fancier kinds of RAID are justified there. The all-members server can get away with cheaper (and maybe even faster) solutions for local disk access.

JustinLeitgeb thinks that perhaps RAID 1 would work on the all-members server, and either RAID 5 or RAID 10 on the admin server. It should be RAID 10 if we can afford it, or RAID 5 if we're shorter on cash. :)

There may be other factors influencing different configuration choices between the servers.

Perhaps we can get away with SATA RAID 1 on the web server -- hopefully this machine won't be IO-bound, especially if we add enough RAM later. Also, it might benefit us to get a couple of rather lightweight web servers behind a load balancer before really maxing them out, in order to have fewer single points of failure (of course, at this point we would probably also want to have two load-balancers using "heartbeats" so that they couldn't cause a prolonged system failure).
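The trade-off between RAID 1, 5, and 10 discussed above is easy to quantify in terms of usable capacity. A minimal sketch, using the donated 4 x 73 GB Cheetah drives as the example array (the function and its name are illustrative, not from any particular tool):

```python
# Usable-capacity comparison for the RAID levels discussed above,
# ignoring filesystem and controller overhead.
def usable_gb(level: str, n_disks: int, disk_gb: int) -> int:
    """Approximate usable capacity of an array of n_disks identical drives."""
    if level == "raid1":   # mirrored: half the raw space
        return n_disks * disk_gb // 2
    if level == "raid5":   # one disk's worth of capacity lost to parity
        return (n_disks - 1) * disk_gb
    if level == "raid10":  # striped mirrors: half the raw space
        return n_disks * disk_gb // 2
    raise ValueError(f"unknown RAID level: {level}")

# Donated server: 4 x 73 GB Seagate Cheetah drives.
for level in ("raid1", "raid5", "raid10"):
    print(level, usable_gb(level, 4, 73), "GB usable")
```

RAID 5 yields the most usable space (219 GB here vs. 146 GB for RAID 10), which is why it is the fallback when cash is short; RAID 10 trades that capacity for better write performance and rebuild behavior.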

=== Proposed Vendors and Models ===
[http://www.dell.com Dell] Models:

 * Possible web server [http://www.hcoop.net/~leitgebj/hcoop_servers/dell_web_server.ps (postscript)] [http://www.hcoop.net/~leitgebj/hcoop_servers/dell_web_server.pdf (PDF)], based on the Dell PowerEdge 1850 $5071.
 * Possible admin server [http://www.hcoop.net/~leitgebj/hcoop_servers/dell_admin_server.ps (postscript)] [http://www.hcoop.net/~leitgebj/hcoop_servers/dell_admin_server.pdf (PDF)], based on the Dell PowerEdge 2850 (offers more space for hard disks in our primary file server) $8486.
Note that when I checked, Dell dropped something like $1200 off the price of each server over $4000, so we should expect some significant discounts. Whichever company we go with, we may be able to negotiate lower prices by emphasizing that we may buy more in the future, etc.

[http://www.monarchcomputer.com/Merchant2/merchant.mv?Screen=CTGY&Store_Code=M&Category_Code=allracks Monarch Computer] Models:

[http://www.penguincomputing.com Penguin Computing] Models:

 * Possible web server configuration hardware RAID $3463 [http://www.hcoop.net/~leitgebj/hcoop_servers/penguin_web_server.ps (postscript)] [http://www.hcoop.net/~leitgebj/hcoop_servers/penguin_web_server.pdf (PDF)]
 * Possible admin server configuration RAID 10 1U $5321 [http://www.hcoop.net/~leitgebj/hcoop_servers/penguin_admin_server.ps (postscript)] [http://www.hcoop.net/~leitgebj/hcoop_servers/penguin_admin_server.pdf (PDF)]
 * Possible admin server configuration, using the 2U server, redundant power supplies, and RAID 5 $4884 [http://www.hcoop.net/~leitgebj/hcoop_servers/penguin_admin_raid5_server.ps (postscript)] [http://www.hcoop.net/~leitgebj/hcoop_servers/penguin_admin_raid5_server.pdf (PDF)]
 * Possible admin server configuration using the 2U server, redundant power supplies, and RAID 10 $5523 [http://www.hcoop.net/~leitgebj/hcoop_servers/penguin_admin_raid10_server_2200.ps (postscript)] [http://www.hcoop.net/~leitgebj/hcoop_servers/penguin_admin_raid10_server_2200.pdf (PDF)]
 * Possible web server configuration with SATA RAID 1, budget configuration about $2700 [http://www.hcoop.net/~leitgebj/hcoop_servers/altus_budget_web.ps (postscript)] [http://www.hcoop.net/~leitgebj/hcoop_servers/altus_budget_web.pdf (PDF)]
With the Penguin models, we seem to have to go to the 2U Altus 2200 in order to get a redundant power supply.

== Ethernet Switch ==
=== Desired Features ===
 * Gigabit
 * 5 ports minimum
 * Managed, so that we can troubleshoot failed NICs more easily
 * Rack-mountable, so that vibration and heat issues are diminished.
 * SNMP monitoring capability
=== Additional Information ===
He.net sent us the following when asked about switch configurations at their site:

''We've got customers using everything from ElCheapoSwitch(tm) to Cisco-grade equipment. The main difference between the two is how much traffic they can deal with, the number of packets they can deal with, and how they can be accessed/monitored. If you're looking at pushing primarily web traffic (<50Mb/s) and do not require any of the more advanced functionality of a managed switch, you could likely just go with a good unmanaged switch. If you were doing higher traffic levels, streaming, or other such traffic which consists of a zillion little packets, especially if it's between your servers, you would be better served by something a bit higher grade.''

And from another support rep at he.net (their responsiveness has been impressive so far!):

''Depends on their needs. If they want to run MRTG, then they need a managed switch. If they just need a switch, a netgear or linksys or d-link will accomplish the job. ''

''Cost differences are greater managed versus non-managed. Non-managed can be 50-$100, whereas managed can start at about $250 and go into the $thousands depending on model and capabilities.''

I also asked he.net about the number of ports we would need, and got information about setting up a VLAN, which would be useful. He.net's response:

''(2 * n) + 1... where n is the number of machines you colocate. ''

''For 3 servers, your switch would need 7 ports. 3 for the private network, 3 for the public network, and 1 for the uplink to HE. ''

''For better control of your packets and their direction... if you're intending to do a public and private network, you might want to consider purchasing 2 smaller ElCheapoSwitches... or using a slightly more managed switch which support creating a VLAN.''
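He.net's rule of thumb above is simple enough to sketch directly; the function name here is just for illustration:

```python
# he.net's port-count rule of thumb: (2 * n) + 1 ports for n colocated
# machines -- one public and one private port per machine, plus the uplink.
def ports_needed(n_machines: int) -> int:
    return 2 * n_machines + 1

assert ports_needed(3) == 7  # matches he.net's 3-server example
for n in (2, 3, 4, 5):
    print(n, "servers ->", ports_needed(n), "switch ports")
```

By this rule, even a modest 8-port switch covers us only up to 3 servers, which is worth keeping in mind when comparing the models below.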

=== Proposed Models and Vendors ===
==== Vendors ====
[http://newegg.com/ Newegg] has been recommended to several of us.

==== Models ====
===== NETGEAR GS108 10/100/1000Mbps =====
[http://www.newegg.com/Product/Product.asp?Item=N82E16833122111 Netgear GS108 Switch ]: Highly-rated Netgear switch that is not rack-mountable

Price: ($56.99)

MichaelOlson thinks that we should go with the Netgear switch. It has been rated as a very reliable product, and is very affordable.

I don't like this switch for the following reasons:

 1. It is not rack-mountable, meaning that it could raise cooling issues in the rack and be more susceptible to shock, which could reduce the reliability of the switch or jar patch cables out of the ports.

 1. It is not managed, so we can't track important information about performance and possible NIC failures via SNMP.
Basically, I think that if we're going to pay all of this money for equipment and hosting, we shouldn't put an interconnect with insufficient features in the middle of our architecture. But, I'm not a networking expert, so I would welcome any opinions contrary to this! JustinLeitgeb

===== Level One GSW-1655 10/100/1000Mbps =====
 * ($249.99) Level One 16-port rack-mountable switch [[http://www.newegg.com/Product/Product.asp?Item=N82E16833118021 link ]]
I've never heard of this brand (Level 1?) so I don't trust it. Any reviews? JustinLeitgeb

===== Dell PowerConnect 2716 =====
This is an 8-port gig switch that is web-manageable and lists for $82. I think that we could make it work, especially if I were able to write some scripts to get important data for monitoring in Nagios or rrdtool. I put the [http://www.hcoop.net/~leitgebj/hcoop_servers/pwcnt_27xx_specs.pdf specification sheet] on our web site. This switch is rack-mountable.

===== 3Com® SuperStack® 3 Switch 3812 =====
[http://www.3com.com/products/en_US/detail.jsp?tab=features&pathtype=purchase&sku=3C17401 3Com® SuperStack® 3 Switch 3812] seems to have most of the features that we need, with a bit of room to grow. Prices range from $1000 to $1500 on [http://froogle.google.com Froogle], in my experience [http://www.cdw.com CDW] is a reliable vendor. Perhaps we should make a jump and get the 24 port, which would support our use of an entire rack in the future, if the price difference is small?

===== Nortel Ethernet Routing Switch 3510-24T and Ethernet Switch 380-24T =====
The 3510 will be much more robust than the others listed. List price is around $2200, but I will see how much of a discount I can get, as wholesale is much, much lower. Here is the product brief: [http://vpit.net/ers3510-brief.pdf ers3510 brief]. I can answer any questions about it, as I do work for Nortel. I am not trying to make a sale here, as I am not a salesman; I just work with Nortel switches every day as an engineer.

The 380 I can donate, as I own it. It is not brand new but is working. Here is a guide that I was able to find to give anyone interested a more in-depth view of it: [http://vpit.net/es380-guide.pdf es380 guide].

ES380 AC power specs:
 * Input current: 1.5 A at 100 VAC
 * Input voltage (rms): 100 to 240 VAC at 47 to 63 Hz
 * Power consumption: 150 W
 * Thermal rating: 1000 BTU/hr maximum

Both are managed, 10/100/1000 switches.

== Serial Port ==
=== Desired Features ===
Is this device really necessary? For an extra $1000 to $2000, and 1U of space, I am not convinced it is worth the expense. In the rare event that a machine is unreachable via ssh, we can use remote hands at he.net and put our resources elsewhere. If someone does think this is necessary, please add links to specific models that would be helpful, and a list of reasons why they would justify the additional cost and rack space. JustinLeitgeb

=== Proposed Models and Vendors ===
[http://www.cyclades.com/ Cyclades] was mentioned as one vendor of Linux-friendly serial port devices.
CategorySystemAdministration

Hardware (last edited 2021-04-17 15:58:03 by ClintonEbadi)