To save space below, we'll use the following working names for the different pieces of hardware involved:
Main is the machine hosting most services.
Dynamic is the machine hosting member dynamic web sites and other services where we run arbitrary code written by members.
Shell is the "most anything goes" shell server.
These are the issues that we're dealing with for the first time in our new set-up, meaning that we should pay special attention to them.
- Multiple servers and coordinating their interaction
- A shared file system
- Increasing member base and corresponding system load
- Different public/private networks, thanks to some switch magic
- Serious automated remote back-up service
- Centralized system logins
1. What Debian version do we run on each server?
AdamChlipala suggests stable on Main and testing on Dynamic and Shell because:
- We want our primary services to be as reliable as possible.
- Members will want to use some cutting-edge stuff for running their dynamic web sites and custom daemons, and stable doesn't keep up very well with the cutting edge. On the other hand, unstable just seems too risky.
- If Shell is used as a testing environment for services later pushed to Dynamic, then it should have the same software versions as Dynamic.
Update: We're currently planning stable on Main and Dynamic, since testing too often has catastrophic upgrade failures in practice.
2. What resource limits are imposed on the different servers?
2.1. Decisions that we've agreed on
- We don't need explicit limits on usage of Main's local resources, because only admins will be able to control them.
2.2. Questions to be resolved
- Do we impose ulimits and related stuff on Dynamic?
AdamChlipala says:
- We need some measures in place to prevent runaway processes from crashing everyone's dynamic web sites. The question is, do we use automated measures, or do we just monitor closely and intervene manually when needed? A bad runaway process can take the server down quickly, so I think it's necessary to use ulimits and their ilk. (A sketch of what such limits could look like appears at the end of this section.)
- How do we control resource usage on Shell?
AdamChlipala says:
- I think I'm in favor of no ulimits or similar on Shell, relying on monitoring and manual intervention to deal with runaway processes and other horrors. We've already had some folks unable to use some implementations of non-mainstream programming languages because these implementations aren't able to deal with our resource limits... and, if you know me, you can probably guess that that Just Breaks My Heart!
- Where we do decide to use monitoring and manual intervention, what monitoring tools can best help us do it?
DavorOcelic says:
- I've talked about this multiple times before, and I'm still interested in doing something real in this area. First, there's a log parser I've written, very similar to Logsurfer (or Logsurfer+, for that matter) but without some of their crucial limitations; we'd definitely turn the Main machine into a common loghost, which would make it a good place to deploy the parser. Second, Nagios, a ping/service/anything monitoring tool. Third, the excellent Puppet (a kind of new-generation cfengine), which we can script to test and fix things on our systems.
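To make the ulimit question concrete, here is a minimal sketch of what system-wide limits on Dynamic could look like via pam_limits. The group name and all values below are placeholders for illustration, not a proposal:
{{{
# /etc/security/limits.conf on Dynamic -- group name and numbers made up
@member         hard    nproc   64       # processes per member
@member         hard    as      262144   # address space, in KB (256 MB)
@member         hard    cpu     30       # CPU time, in minutes

# pam_limits must be enabled for these to apply, e.g. in
# /etc/pam.d/common-session:
#   session required pam_limits.so
}}}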
3. Who can log into which servers?
3.1. Decisions that we've agreed on
- Only admins can log into Main
- Everyone can log into Shell
DavorOcelic says:
- This is a good general rule. For any exceptions, both the usual Unix auth mechanism and LDAP allow great flexibility (per-user list of allowed machines and also per-machine list of allowed users).
3.2. Questions to be resolved
- Can everyone log into Dynamic, too?
AdamChlipala says:
- I think it is important to allow this. In my mental model, Shell is deliberately allowed to be unstable, because we don't know how to impose automatic limits that still permit everything people want to do. I know that a lot of the people involved in this planning aren't particularly interested in using non-mainstream programming languages and other things that conventional hosting providers are never going to support, but for me and several other members this is one of the defining aspects of HCoop. That means we need to be able to go crazy with Shell while committing to keeping Dynamic up all the time. If Shell is down, members need to be able to use Dynamic to configure their services. That doesn't stop anyone from following a development/production split while Shell is up, logging in only there.
4. How are we going to handle the basic logistics of a shared filesystem and logins?
4.1. Decisions that we've agreed on
- We're going to use the AFS filesystem and Kerberos. (AFS mandates the use of Kerberos.)
- We're going to use LDAP for logins. (It plays fine together with AFS and Kerberos, no worries.) A sketch of how the pieces fit together appears at the end of this section.
4.2. Questions to be resolved
Everything else!
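That said, here is a first sketch of how the three pieces could fit together on each Debian server: LDAP answers account lookups through NSS, Kerberos handles authentication through PAM, and an AFS session module picks up tokens at login. Module options and file contents below are from memory and would need checking:
{{{
# /etc/nsswitch.conf -- account *information* comes from LDAP
passwd:         files ldap
group:          files ldap

# /etc/pam.d/common-auth -- passwords are checked against Kerberos
auth    sufficient      pam_krb5.so minimum_uid=1000
auth    required        pam_unix.so nullok_secure

# /etc/pam.d/common-session -- get an AFS token after login
# (module from the libpam-openafs-session package)
session optional        pam_openafs_session.so
}}}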
5. How are we going to charge members accurately for their disk usage (monetarily, or just to keep a sense of who is using what)?
There are a lot of issues here. We provide a number of shared services whose default models create files on behalf of members, with all of those files (by default) owned by a single UNIX user. Examples include PostgreSQL and MySQL databases, virtual mailboxes, Mailman mailing lists, and domtool configuration files. Any of these can grow large enough to use up all disk space on a volume, through either malicious action or an accidental runaway process.
Right now we use a gimpy scheme of group quotas on /home, storing all of these files on that partition with group ownership indicating which member is responsible for them. I think AFS provides a nicer way of doing this. With the current approach, we are constantly fighting the out-of-the-box Debian packages, which set permissions differently than we need them. With AFS, I think we can separate permissions from locations.
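As a sketch of the AFS way (server, partition, paths, group, and numbers below are all hypothetical): each member gets a dedicated volume with its own quota, and the daemons keep whatever file ownership they like, because accounting happens at the volume level rather than through ownership:
{{{
# Create a volume for member jdoe's mailing list data, capped at
# roughly 100 MB, and mount it where the daemon expects it:
vos create afssrv1 /vicepa user.jdoe.lists -maxquota 100000
fs mkmount /afs/hcoop.net/lists/jdoe user.jdoe.lists
fs setacl /afs/hcoop.net/lists/jdoe jdoe rlidwka        # jdoe: full rights
# The daemon gets write access via a pts group of its own:
fs setacl /afs/hcoop.net/lists/jdoe mailman-daemons rlidwk
}}}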
6. Off-site file back-up services
6.1. Questions to be resolved
7. DNS
7.1. Decisions that we've agreed on
Update: Scrap that! We're using BIND on Main and Dynamic, since it's so much better supported throughout the 'net, makes master/slave configurations easier, etc. In the future, we want to expand to include a tertiary DNS server in a different geographic location and on an entirely different network.
7.2. Questions to be resolved
- How do we arrange redundant DNS infrastructure?
JustinLeitgeb says:
- For now, I think we can just put our backup DNS server on either the shell or web machine at Peer 1, depending on how we finally set things up. We will have to configure this with domtool or its replacement. I don't really see any other options here; am I missing something?
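Whichever machine the slave lands on, the BIND side of a master/slave pair is small. A sketch (the zone name and addresses are placeholders):
{{{
// On Main, in /etc/bind/named.conf.local (master):
zone "example.org" {
        type master;
        file "/etc/bind/db.example.org";
        allow-transfer { 192.0.2.2; };   // the slave
        notify yes;
};

// On the slave machine:
zone "example.org" {
        type slave;
        masters { 192.0.2.1; };          // Main
        file "/var/cache/bind/db.example.org";
};
}}}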
7.3. References to how we do things now
DnsConfiguration, DomainRegistration
8. FTP
8.1. Decisions that we've agreed on
- Run an FTP daemon on Main
- Only allow encrypted authentication methods
- Only allow users on a white-list to use FTP; they should be using SCP if possible
8.2. References to how we do things now
FtpConfiguration, FileTransfer
9. HTTP
9.1. Decisions that we've agreed on
- Using Apache 2
- Running all official/administrative HCoop web sites on Main
- Running all member dynamic web sites on Dynamic
9.2. Questions to be resolved
- Do we completely separate administrative web sites from the rest, or do we allow any member static web site to be served by Main?
DavorOcelic says:
- Well, I don't think we have enough administrative web sites (nor are the ones we do have used heavily enough) to justify complete separation. It should be OK to serve static web sites from Main, I believe. We could create default web spaces for users, like ~/public_html/ served from Dynamic and ~/static_html/ served from Main, or something like that. (Please give more input on this.)
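The two-web-spaces idea would be cheap to express in Apache 2 configuration. A sketch using mod_userdir, with the directory names suggested above (everything else is illustrative, Apache 2.0-era syntax):
{{{
# On Dynamic -- members' dynamic sites:
<IfModule mod_userdir.c>
    UserDir public_html
</IfModule>

# On Main -- static content only, no script execution:
<IfModule mod_userdir.c>
    UserDir static_html
    <Directory /home/*/static_html>
        Options Indexes SymLinksIfOwnerMatch
        AllowOverride AuthConfig Limit
        Order allow,deny
        Allow from all
    </Directory>
</IfModule>
}}}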
9.3. References to how we do things now
UserWebsites, DynamicWebSites, VirtualHostConfiguration
10. IMAP/POP
10.1. Decisions that we've agreed on
- Running the primary IMAP/POP daemons on Main
- Running both SSL and normal versions, where the normal versions can only be used over the local network
10.2. Questions to be resolved
- Do we keep using Courier IMAP or do we switch to something like Cyrus?
10.3. References to how we do things now
UsingEmail, EmailConfiguration
11. Jabber
11.1. Decisions that we've agreed on
- Run the same thing we're running now, on Main
11.2. Questions to be resolved
11.3. References to how we do things now
JabberServer
12. Mailing lists
12.1. Decisions that we've agreed on
- Using the Mailman software
- Running the daemon on Main
12.2. Questions to be resolved
- How/where do we store mailing list data so that it is appropriately charged towards a member's storage quota?
12.3. References to how we do things now
MailingListConfiguration
13. Relational database servers
13.1. Decisions that we've agreed on
- Running PostgreSQL and MySQL servers on Main
13.2. Questions to be resolved
- Are we satisfied with the latest versions from Debian stable, or do we want to do something special?
- Do we do remote PostgreSQL authentication (from Dynamic, etc.) via the ident method? DavorOcelic thinks it's OK.
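The ident approach amounts to one line per client machine in pg_hba.conf. A sketch in PostgreSQL 8.x-era syntax (the address is a placeholder for Dynamic's private IP):
{{{
# TYPE  DATABASE  USER  CIDR-ADDRESS    METHOD
host    all       all   192.0.2.3/32    ident sameuser
}}}
Note that ident trusts the identd answering on the client machine, which is only sensible here because Dynamic is ours and members can't bind its privileged identd port themselves.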
13.3. References to how we do things now
UsingDatabases
14. SMTP
14.1. Decisions that we've agreed on
- Using Exim 4
- Running the primary SMTP daemon on Main
14.2. Questions to be resolved
- Run secondary MX on Dynamic or elsewhere?
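Wherever the secondary ends up, the DNS side is just a second MX record at lower priority. A sketch (host names are placeholders):
{{{
; in each hosted domain's zone file:
example.org.    IN  MX  10  mail.hcoop.net.   ; primary, on Main
example.org.    IN  MX  20  mx2.hcoop.net.    ; secondary, wherever it lives
}}}
The Exim instance on the secondary would also need a list of valid recipient addresses, or it ends up accepting and then bouncing spam.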
14.3. References to how we do things now
UsingEmail, EmailConfiguration
15. Spam detection
15.1. Decisions that we've agreed on
15.2. References to how we do things now
UsingEmail, SpamAssassin, FeedingSpamAssassin, SpamAssassinAdmin
16. SSH
16.1. Decisions that we've agreed on
- Use the standard SSH daemon in Debian
- Run it on all of our servers, with varying access permissions based on the shared user list
DavorOcelic says:
- Do we need ssh on Main too, if we've got a serial console?
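Varying access per machine can come straight from sshd_config, layered on top of whatever the shared user list allows. A sketch (group names are placeholders):
{{{
# /etc/ssh/sshd_config on Main:
PermitRootLogin no
PasswordAuthentication no      # keys or Kerberos only
AllowGroups admin

# On Shell and Dynamic, the same directive would name the full
# membership group instead:
#   AllowGroups member
}}}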
16.2. References to how we do things now
SshConfiguration
17. SIP Redirection
Do we also want to add the service of SIP redirection? I think this would go along very well with Clinton's suggestion of allowing people to have Jabber accounts with their own domain. This way someone could have their email, Jabber, and SIP addresses all consolidated. A SIP redirection server would use next to no bandwidth; all it would do, when a call comes in, is hand back another address where the user can be found. For example, when someone tries to call user1@userdomain.com, the server would return a user-defined address, such as a Gizmo or FWD account name, and the call would continue on to that seamlessly. - ShaunEmpie
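For a picture of how light this is: a SIP redirect server holds no call state at all; it answers an incoming INVITE with a 302 pointing at the user-configured target, and the caller's client takes it from there. Roughly (addresses are placeholders):
{{{
INVITE sip:user1@userdomain.com SIP/2.0
        ... caller's headers ...

SIP/2.0 302 Moved Temporarily
Contact: <sip:user1@gizmo.example.net>
}}}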
18. Domtool
Everyone's favorite spiffy system for letting legions of users manage the same daemons securely.
AdamChlipala says:
- I would like to rewrite this completely, for reasons including:
  - From a software engineering perspective, the implementation is not so nice.
  - There is no support for configuring multiple machines from the same configuration file source.
  - Scalability with the increasing amount of configuration is not so hot.
  - The current configuration scheme encourages copying-and-pasting, which makes it hard to make sweeping changes to our suggested configuration base.
JustinLeitgeb says:
- If we're doing this, let's think about storing configuration information in a database. It seems that it would scale better, and it would certainly be easier to write programs that let users configure domains via a web interface. I'm also thinking about writing a tool to support hosts with dynamic IPs on the Internet (like what dyndns.org provides). For that to work, we basically need to allow fairly frequent, small changes to DNS zones without completely reloading the server, and we also need to be able to configure the TTL on host records (this may already be possible in domtool; I haven't checked). There's a sketch of the DNS mechanics at the end of this section. If the new domtool is written in Perl, I will be able to make software contributions; otherwise, I probably won't have time to learn a new language in the next few years.
AdamChlipala says:
- My conception of the optimal configuration tool makes every configuration file a program, with textual structure that maps very poorly to a relational database, so I am still strongly against the idea of SQL-based configuration.
- domtool already supports everything needed for dynamic DNS, including setting TTL, as someone already requested support for doing that himself.
- I won't be involved with any Perl development.
JustinLeitgeb says:
- OK, I understand where you're coming from if you want the configuration files to be programs. I agree that it will be a stronger system that way.
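On the dynamic-IP point raised above: with BIND, frequent small changes don't require reloading the server at all; the usual mechanism is a TSIG-signed dynamic update plus a short TTL on the record itself. A sketch (zone, key file name, and address are placeholders):
{{{
$ nsupdate -k Khome.example.org.+157+12345.private
> update delete home.example.org. A
> update add home.example.org. 60 A 192.0.2.50
> send
}}}
The `60` is the per-record TTL in seconds, and the zone on the master would need an allow-update clause naming that key.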
18.1. References to how we do things now
DomainTool
19. Portal
19.1. Decisions that we've agreed on
- Keep doing the same as now, running on Main
19.2. References to how we do things now
The portal
20. Web e-mail client
20.1. Decisions that we've agreed on
20.2. References to how we do things now
SquirrelMail
21. Webmin/Usermin
21.1. Decisions that we've agreed on
- Keep doing the same as now, running on Main
21.2. References to how we do things now
Usermin
22. Wiki
22.1. Decisions that we've agreed on
- Start from the same data as our current wiki
- Host the wiki on Main
- Keep using MoinMoin
22.2. Questions to be resolved
- Do we upgrade the wiki to the latest release, even if there is no Debian package for it?
MichaelOlson says:
- I want to upgrade the Moin software to the latest release. The main reason is that the UserPreferences page is broken in the current version: it has no "Mail me my account data" button, in spite of the instructions on that page. This seems to be fixed on the official Moin wiki, so it is most likely fixed in the latest release.
- The idea is for me to start by upgrading my LUG's wiki instance. If no unsolvable problems are encountered, then upgrade HCoop's wiki instance as well. If no up-to-date Debian package is found (and there wasn't one, last time I checked), I could either:
(a) make a Debian package, using the Debian patches against their moin package as a reference, or
(b) back up the Debian additions (site-wide wiki farm settings), remove the moin Debian package, and install from source.
22.3. References to how we do things now
This wiki
Here are the security issues we need to worry about, sorted by resource categories of varying abstraction levels. What we mostly deal with here is avoiding negative consequences of actions by members with legitimate access to our servers.
23. CPU time
We haven't really encountered any trouble with this literal resource yet. However, potential problems come in when we're talking about user dynamic web site programs called by a shared Apache daemon. Apache allocates a fixed set of child processes, and each pending dynamic web site program takes up one child process for the duration of its life. Enough infinite-looping or slow CGI scripts can bring Apache down for everyone.
23.1. Current remedies
As per ResourceLimits, we use patched suexec programs to limit dynamic page generation programs to 10 seconds of running time. We also have a time-out for mod_proxy accesses, which we provide so that members can implement dynamic web sites through their own daemons, with the main Apache proxying to them.
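The suexec patch itself is C, but its effect is easy to picture as a wrapper script (illustrative only; the real limit is compiled into suexec):
{{{
#!/bin/sh
ulimit -t 10   # at most 10 CPU seconds; the kernel signals past this
exec "$@"      # run the member's CGI program under that cap
}}}
The mod_proxy side is just a timeout directive (ProxyTimeout in Apache 2) set to whatever value we've picked.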
24. Disk usage
We can't let one person use up all of the disk space, now can we?
24.1. Current remedies
We use group quotas so that members can be charged for files that they don't own. This is still hackish and allows some unintended behaviors. DaemonFileSecurity has more detail.
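Concretely, the scheme looks something like this (the path and numbers are made up; DaemonFileSecurity has the real details):
{{{
# Files a daemon creates for member jdoe get her group...
chgrp -R jdoe /home/mailman/jdoe-announce
# ...and a group quota on /home caps the total
# (soft/hard limits in 1 KB blocks, no inode limit):
setquota -g jdoe 500000 512000 0 0 /home
}}}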
25. Network bandwidth
We don't do a thing to limit this now, since our current host provides significantly more bandwidth than we need.
25.1. Questions to be resolved
- Should we start doing anything beyond monitoring?
26. Network connection privileges
It's good to follow least privilege in who is allowed to connect to/listen on which ports.
26.1. Current remedies
We have a firewall system in place now. It uses a custom tool documented partially on FirewallRules.
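The generated rules have roughly this flavor (a hand-written equivalent; the ports shown are illustrative):
{{{
iptables -P INPUT DROP                                       # default deny
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT                # ssh
iptables -A INPUT -p tcp --dport 80 -j ACCEPT                # http
# A member approved to run a daemon on some port gets one more
# ACCEPT rule, and nothing else gets in.
}}}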
27. Number of processes
Fork bombs are no fun, and many resource limiting schemes are per-process and so require a limit on process creation to be effective.
27.1. Current remedies
As per ResourceLimits, we use the nproc ulimit.
28. RAM
This is probably the most surprising thing for novices to the hosting co-op planning biz. If you would classify yourself as such, then I bet you would leave RAM off your list of resources that need to be protected with explicit security measures!
Nonetheless, it may just be the most critical resource to control. In our experience back when everything ran on Abulafia, the most common cause of system outage was a user running an out-of-control process that allocated all available memory, causing other processes to drop dead left and right as memory allocation calls failed. We're letting people run their own daemons 24/7, so this just can't be ignored.
28.1. Current remedies
As per ResourceLimits, we use the as ulimit to put a cap on how much virtual memory a process can allocate.
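From a member's shell, the caps read back like this (bash reports the as limit in KB; the numbers are illustrative, not our real settings):
{{{
$ ulimit -v    # the 'as' limit: virtual memory per process
262144
$ ulimit -u    # the 'nproc' limit: processes per user
64
}}}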
CategorySystemAdministration CategoryHistorical