Diff for "Hardware"

Differences between revisions 41 and 211 (spanning 170 versions)
Revision 41 as of 2006-06-30 15:18:33
Size: 12454
Editor: ShaunEmpie
Comment: added es380 power consumption
Revision 211 as of 2020-03-07 02:50:03
Size: 2442
Editor: ClintonEbadi
Comment:
Deletions come from revision 41 and additions come from revision 211; they appear interleaved within each hunk below.
Line 1: Line 1:
= New System Hardware =
During the HCoop IRC meeting on June 24, 2006, the group decided that it would base its new system architecture on the following pieces of hardware:
#pragma section-numbers off
Line 4: Line 3:
 * Two robust servers, one that doesn't allow normal user logins, and one that does.
 * One switch to form a LAN between these servers.
 * One serial port device, to facilitate remote access to our servers.
Also, it was mentioned that we should research hardware support contracts from any vendor that will be selling us equipment.
This page collects information on the hardware that we have installed or plan to install as part of HCoop infrastructure.
Line 9: Line 5:
Additionally, the group decided that the server that HCoop currently owns, Abulafia, will be brought to he.net for shell service. This will follow a necessary re-load of the OS software at a time to be determined later.
<<TableOfContents>>
Line 11: Line 7:
This page will serve as a forum for collaborative research on the pieces of hardware that we need.
= Linode =
Line 13: Line 9:
'''Some members are willing to donate pieces of hardware that fit our needs. See the page HardwareDonations for more information. Also please check the power guidelines for he.net as this is something that we will have to work around, either by going to another colocation site or hosting less equipment to start out at he.net.'''
== outpost ==
Line 15: Line 11:
== He.net Power/Rack Guidelines ==
I asked he.net about the type of racks they use and for additional information. They told us to plan on the following:
 Location:: Linode London
 Allocated Resources:: 1 vCPU, 2G RAM, 50G storage.
 Operating System:: Debian Stretch AMD64
 User Logins:: No
 Intended Use:: Secondary AndrewFileSystem server, DNS
 Details:: ServerOutpost
Line 18: Line 18:
''We have locking cabinets. Our 7U space is 12 inches tall, 19 inches wide, and 32 inches deep. The cabinets come with front mounting posts, so don't purchase mounting rails. If your servers are extra heavy and need additional support, we offer rackmount shelves for a one time $60 setup fee.''
'''Use: secondary DNS on a different subnet, all tasks requiring remote location.'''
Line 20: Line 20:
We will almost definitely need this shelf for our equipment, so add on the $60 fee to our expected costs.
= Digital Ocean =
Line 22: Line 22:
''You get one outlet per 7U, so you'll need to provide a power strip for additional machines.''
== busted ==
Line 24: Line 24:
''You can use approx 2 amps of power per 7U.''
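For planning purposes, that 2 A ceiling is easy to turn into a wattage budget. A back-of-the-envelope sketch only (the 120 V line voltage and the per-machine wattages are assumptions, not vendor figures):

{{{
# Back-of-the-envelope power budget for one 7U allocation (sketch only).
# Assumptions: 120 V circuits; the per-machine wattages are placeholders.
CIRCUIT_AMPS = 2.0       # he.net guideline: about 2 A per 7U
LINE_VOLTAGE = 120.0     # assumed US line voltage

budget_watts = CIRCUIT_AMPS * LINE_VOLTAGE      # 240 W for the whole 7U

loads_watts = {"admin server": 100, "web server": 100}   # placeholder draws
planned = sum(loads_watts.values())

print("budget %.0f W, planned %d W, headroom %.0f W"
      % (budget_watts, planned, budget_watts - planned))
# budget 240 W, planned 200 W, headroom 40 W
}}}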
 Location:: DigitalOcean NYC3
 Allocated Resources:: 1 vCPU, 2G RAM, 50G storage.
 Operating System:: Debian Buster AMD64
 User Logins:: No
 Intended Use:: Porting forward HCoop services to Debian 10 (Buster) from Debian 9 (Stretch), and potentially as a standalone DomTool node / experimental server for future feature development.
 Details:: ServerBusted
Line 26: Line 31:
I asked for more information on the type of power strip:
== gibran ==
Line 28: Line 33:
''You'll just want a basic power strip. A surge protector is not needed. We have failover UPS's that filter the power and protect against power surges. We also have PDU's for each row of cabinets for even more filtering.''
 Location:: DigitalOcean NYC3
 Allocated Resources:: 6 vCPU, 16G RAM, 320G storage. Additional block storage volume for OpenAFS `/vicepa`
 Operating System:: Debian Stretch AMD64
 User Logins:: No
 Intended Use:: Primary AndrewFileSystem fileserver and dbserver, primary MitKerberos KDC, SQL databases, [[ConfigurationManagent|Puppet Master]]
 Details:: ServerGibran
Line 30: Line 40:
According to he.net, the surge protector does not have to be rack-mountable. Anyone have $7 we can borrow? ;-)
== lovelace ==
Line 32: Line 42:
== Servers ==
We will be purchasing two servers, which will be configured and sent to he.net for colocation.
 Location:: DigitalOcean NYC3
 Allocated Resources:: 2 vCPU, 2G RAM, 60G storage.
 Operating System:: Debian Stretch AMD64
 User Logins:: No
 Intended Use:: Secondary AndrewFileSystem fileserver and dbserver, secondary MitKerberos KDC
 Details:: ServerLovelace
Line 35: Line 49:
=== Desired Features ===
These servers should be as redundant as possible. At this point, we cannot afford to eliminate every single point of failure, so we should look for the following features in our new servers:
Line 38: Line 50:
 * Redundant power supplies.
  * How important is this really? -- ClintonEbadi
 * Hardware RAID.
 * Dual CPUs; AMD seems to be a stronger option than Intel.
==== Differences Between the Servers ====
The admin-only server will hopefully be serving an AFS file system, which means that fancier kinds of RAID are justified there. The all-members server can get away with cheaper (and maybe even faster) solutions for local disk access.
== marsh ==
Line 45: Line 52:
JustinLeitgeb thinks that perhaps RAID 1 would work on the all-members server, and either RAID 5 or RAID 10 on the admin server. It should be RAID 10 if we can afford it, or RAID 5 if we're shorter on cash. :)
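For a rough feel of the trade-off under discussion, here is how usable capacity works out for RAID 1, RAID 5, and RAID 10 with equal-size disks. The disk counts and the 73 GB size below are illustrative assumptions, not figures from any of the vendor quotes:

{{{
# Rough usable-capacity comparison for the RAID levels under discussion.
# Disk counts and the 73 GB size are illustrative assumptions only.
DISK_GB = 73

def usable_gb(level, n_disks):
    if level == "RAID 1":          # n-way mirror: capacity of a single disk
        return DISK_GB
    if level == "RAID 5":          # one disk's worth of parity
        return (n_disks - 1) * DISK_GB
    if level == "RAID 10":         # striped mirrors: half the raw capacity
        return n_disks // 2 * DISK_GB
    raise ValueError(level)

for level, n in (("RAID 1", 2), ("RAID 5", 4), ("RAID 10", 4)):
    print("%-7s on %d x %dGB disks -> %d GB usable"
          % (level, n, DISK_GB, usable_gb(level, n)))
# RAID 1  on 2 x 73GB disks -> 73 GB usable
# RAID 5  on 4 x 73GB disks -> 219 GB usable
# RAID 10 on 4 x 73GB disks -> 146 GB usable
}}}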
 Location:: DigitalOcean NYC3
 Allocated Resources:: 4 vCPU, 8G RAM, 160G storage.
 Operating System:: Debian Stretch AMD64
 User Logins:: Yes
 Intended Use:: Member logins
 Details:: ServerMarsh
Line 47: Line 59:
There may be other factors influencing different configuration choices between the servers.
== minsky ==
Line 49: Line 61:
Perhaps we can get away with SATA RAID 1 on the web server -- hopefully this machine won't be IO-bound, especially if we add enough RAM later. It might also benefit us to put a couple of rather lightweight web servers behind a load balancer before really maxing them out, in order to have fewer single points of failure (of course, at that point we would probably also want two load balancers using "heartbeats", so that the balancer itself couldn't cause a prolonged system failure).
 Location:: DigitalOcean NYC3
 Allocated Resources:: 2 vCPU, 4G RAM, 80G storage.
 Operating System:: Debian Stretch AMD64
 User Logins:: No
 Intended Use:: Mail server and ejabberd server
 Details:: ServerMinsky
Line 51: Line 68:
=== Proposed Vendors and Models ===
[http://www.dell.com Dell] Models:
== shelob ==
Line 54: Line 70:
 * Possible web server [http://www.hcoop.net/~leitgebj/hcoop_servers/dell_web_server.ps (postscript)] [http://www.hcoop.net/~leitgebj/hcoop_servers/dell_web_server.pdf (PDF)], based on the Dell PowerEdge 1850, $5071.
 * Possible admin server [http://www.hcoop.net/~leitgebj/hcoop_servers/dell_admin_server.ps (postscript)] [http://www.hcoop.net/~leitgebj/hcoop_servers/dell_admin_server.pdf (PDF)], based on the Dell PowerEdge 2850 (offers more space for hard disks in our primary file server), $8486.
Note that when I checked, Dell dropped something like $1200 off of the price of each server over $4000, so we should expect some significant discounts. Whichever company we go with, we may be able to negotiate lower prices by emphasizing that we may buy more in the future, etc.
 Location:: DigitalOcean NYC3
 Allocated Resources:: 4 vCPU, 8G RAM, 160G storage.
 Operating System:: Debian Stretch AMD64
 User Logins:: No
 Intended Use:: Web Server
 Details:: ServerShelob
Line 58: Line 77:
[http://www.monarchcomputer.com/Merchant2/merchant.mv?Screen=CTGY&Store_Code=M&Category_Code=allracks Monarch Computer] Models:
= Awaiting setup =
Line 60: Line 79:
[http://www.penguincomputing.com Penguin Computing] Models:
None.
Line 62: Line 81:
 * Possible web server configuration hardware RAID $3463 [http://www.hcoop.net/~leitgebj/hcoop_servers/penguin_web_server.ps (postscript)] [http://www.hcoop.net/~leitgebj/hcoop_servers/penguin_web_server.pdf (PDF)]
 * Possible admin server configuration RAID 10 1U $5321 [http://www.hcoop.net/~leitgebj/hcoop_servers/penguin_admin_server.ps (postscript)] [http://www.hcoop.net/~leitgebj/hcoop_servers/penguin_admin_server.pdf (PDF)]
 * Possible admin server configuration, using the 2U server, redundant power supplies, and RAID 5 $4884 [http://www.hcoop.net/~leitgebj/hcoop_servers/penguin_admin_raid5_server.ps (postscript)] [http://www.hcoop.net/~leitgebj/hcoop_servers/penguin_admin_raid5_server.pdf (PDF)]
 * Possible admin server configuration using the 2U server, redundant power supplies, and RAID 10 $5523 [http://www.hcoop.net/~leitgebj/hcoop_servers/penguin_admin_raid10_server_2200.ps (postscript)] [http://www.hcoop.net/~leitgebj/hcoop_servers/penguin_admin_raid10_server_2200.pdf (PDF)]
 * Possible web server configuration with SATA RAID 1, budget configuration about $2700 [http://www.hcoop.net/~leitgebj/hcoop_servers/altus_budget_web.ps (postscript)] [http://www.hcoop.net/~leitgebj/hcoop_servers/altus_budget_web.pdf (PDF)]
With the Penguin models, we seem to have to go to the 2U Altus 2200 in order to get a redundant power supply.
= Awaiting purchase =
Line 69: Line 83:
== Ethernet Switch ==
=== Desired Features ===
 * Gigabit
 * 5 ports minimum
 * Managed, so that we can troubleshoot failed NICs more easily
 * Rack-mountable, so that vibration and heat issues are diminished.
 * SNMP monitoring capability
=== Additional Information ===
He.net sent us the following when asked about switch configurations at their site:
None.
Line 79: Line 85:
''We've got customers using everything from ElCheapoSwitch(tm) to Cisco-grade equipment. The main difference between the two is how much traffic they can deal with, the number of packets they can deal with, and how they can be accessed/monitored. If you're looking at pushing primarily web traffic (<50Mb/s) and do not require any of the more advanced functionality of a managed switch, you could likely just go with a good unmanaged switch. If you were doing higher traffic levels, streaming, or other such traffic which consists of a zillion little packets, especially if it's between your servers, you would be better served by something a bit higher grade.''
= Decommissioned =
Line 81: Line 87:
And from another support rep at he.net (their responsiveness has been impressive so far!):
See [[/Decommissioned]] for older machines
Line 83: Line 89:
''Depends on their needs. If they want to run MRTG, then they need a managed switch. If they just need a switch, a netgear or linksys or d-link will accomplish the job. ''

''Cost differences are greater managed versus non-managed. Non-managed can be $50-$100, whereas managed can start at about $250 and go into the $thousands depending on model and capabilities.''

I also asked he.net about the number of ports we would need, and got some useful information about setting up a VLAN. He.net's response:

''(2 * n) + 1... where n is the number of machines you colocate. ''

''For 3 servers, your switch would need 7 ports. 3 for the private network, 3 for the public network, and 1 for the uplink to HE. ''

''For better control of your packets and their direction... if you're intending to do a public and private network, you might want to consider purchasing 2 smaller ElCheapoSwitches... or using a slightly more managed switch which support creating a VLAN.''
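He.net's rule of thumb above is just ports = 2n + 1: one public and one private port per machine, plus the uplink. A quick sanity check of their example:

{{{
# he.net's rule of thumb: one public + one private port per machine,
# plus one uplink to HE.
def ports_needed(machines):
    return 2 * machines + 1

for n in (2, 3, 4):
    print("%d machines -> %d switch ports" % (n, ports_needed(n)))
# 2 machines -> 5 switch ports
# 3 machines -> 7 switch ports   (he.net's example)
# 4 machines -> 9 switch ports
}}}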

=== Proposed Models and Vendors ===
==== Vendors ====
[http://newegg.com/ Newegg] has been recommended to several of us.

==== Models ====
===== NETGEAR GS108 10/100/1000Mbps =====
[http://www.newegg.com/Product/Product.asp?Item=N82E16833122111 Netgear GS108 Switch ]: Highly-rated Netgear switch that is not rack-mountable

Price: ($56.99)

MichaelOlson thinks that we should go with the Netgear switch. It has been rated as a very reliable product, and is very affordable.

I don't like this switch for the following reasons:

 1. It is not rack-mountable, meaning that it could raise issues for cooling in the rack, and be more susceptible to shock that could reduce reliability of the switch, or jar patch cables out of the ports.

 1. It is not managed, so we can't track important information about performance and possible NIC failures via SNMP.
Basically, I think that if we're going to pay all of this money for equipment and hosting, we shouldn't put an interconnect with insufficient features in the middle of our architecture. But, I'm not a networking expert, so I would welcome any opinions contrary to this! JustinLeitgeb

===== Level One GSW-1655 10/100/1000Mbps =====
 * ($249.99) Level One 16-port rack-mountable switch [[http://www.newegg.com/Product/Product.asp?Item=N82E16833118021 link ]]
I've never heard of this brand (Level 1?) so I don't trust it. Any reviews? JustinLeitgeb

===== Dell PowerConnect 2716 =====
This is an 8-port gig switch that is web-manageable and lists for $82. I think that we could make it work, especially if I were able to write some scripts to get important data for monitoring in Nagios or rrdtool. I put the [http://www.hcoop.net/~leitgebj/hcoop_servers/pwcnt_27xx_specs.pdf specification sheet] on our web site. This switch is rack-mountable.
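To make the "write some scripts" idea concrete, something like the following would be enough to pull per-port traffic counters for Nagios or rrdtool. This is only a sketch: it assumes the net-snmp command-line tools are installed and that the switch speaks SNMPv2c with the community string `public`; the hostname and interface index below are placeholders, not real values from our setup.

{{{
# Poll per-port traffic counters from a managed switch over SNMP (sketch).
# Assumes the net-snmp tools are installed and the switch speaks SNMPv2c
# with community "public"; hostname and ifIndex below are placeholders.
import subprocess

SWITCH = "switch.hcoop.net"   # placeholder hostname
COMMUNITY = "public"          # placeholder community string
IF_INDEX = 1                  # switch port to watch

OIDS = {
    "in_octets":  "1.3.6.1.2.1.2.2.1.10.%d" % IF_INDEX,   # IF-MIB::ifInOctets
    "out_octets": "1.3.6.1.2.1.2.2.1.16.%d" % IF_INDEX,   # IF-MIB::ifOutOctets
}

for name, oid in OIDS.items():
    value = subprocess.check_output(
        ["snmpget", "-v2c", "-c", COMMUNITY, "-Oqv", SWITCH, oid])
    print("%s: %s" % (name, value.strip().decode()))
}}}

Feeding those counters into rrdtool, or wrapping the same call as a Nagios check, would be straightforward from there.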

===== 3Com® SuperStack® 3 Switch 3812 =====
[http://www.3com.com/products/en_US/detail.jsp?tab=features&pathtype=purchase&sku=3C17401 3Com® SuperStack® 3 Switch 3812] seems to have most of the features that we need, with a bit of room to grow. Prices range from $1000 to $1500 on [http://froogle.google.com Froogle]; in my experience, [http://www.cdw.com CDW] is a reliable vendor. Perhaps we should make the jump and get the 24-port model, which would support our use of an entire rack in the future, if the price difference is small?

===== Nortel Ethernet Routing Switch 3510-24T and Ethernet Switch 380-24T =====
The 3510 will be much more robust than the others listed. List price is around $2200, but I will see how much of a discount I can get, as wholesale is much, much lower. Here is the product brief: [http://vpit.net/ers3510-brief.pdf ers3510 brief]. I can answer any questions about it, as I do work for Nortel. I am not trying to make a sale here, as I am not a salesman. I just work with Nortel switches every day as an engineer.

The 380 I can donate, as I own it. It is not brand new but is working. Here is a guide that I was able to find to give anyone interested a more in-depth view of it: [http://vpit.net/es380-guide.pdf es380 guide].

ES380 AC Power Specs:
 * Input current: 1.5 A at 100 VAC
 * Input voltage (rms): 100 to 240 VAC at 47 to 63 Hz
 * Power consumption: 150 W
 * Thermal rating: 1000 BTU/hr maximum

Both are managed, 10/100/1000 switches.
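As a quick check of the ES380 figures against the he.net power guidelines above (the 120 V line voltage is an assumption; 1 W = 3.412 BTU/hr is the standard conversion):

{{{
# Sanity-check the ES380 figures against the colocation power budget (sketch).
WATTS = 150.0                 # ES380 power consumption, from the specs above
BTU_PER_WATT_HR = 3.412       # 1 W of draw = 3.412 BTU/hr of heat
LINE_VOLTAGE = 120.0          # assumed US line voltage

print("thermal load: %.0f BTU/hr (rated max 1000 BTU/hr)"
      % (WATTS * BTU_PER_WATT_HR))
print("current draw: %.2f A of the ~2 A per 7U he.net budget"
      % (WATTS / LINE_VOLTAGE))
# thermal load: 512 BTU/hr (rated max 1000 BTU/hr)
# current draw: 1.25 A of the ~2 A per 7U he.net budget
}}}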

== Serial Port ==
=== Desired Features ===
Is this device really necessary? For an extra $1000-$2000 and the use of 1U, I am not convinced that this is worth the expense. It seems that in the rare event that our machines are inaccessible via ssh, we can use remote hands at he.net and put our resources elsewhere. If someone does think this is necessary, please link to specific models that would be helpful, along with reasons why they would come in handy that justify the additional cost and space in our rack. JustinLeitgeb

=== Proposed Models and Vendors ===
[http://www.cyclades.com/ Cyclades] was mentioned as one vendor of serial port devices which are linux-friendly.
----
CategorySystemAdministration
