
Diff for "AndrewFileSystem"

Differences between revisions 40 and 41
Revision 40 as of 2009-09-08 15:34:44
Size: 4868
Editor: ClintonEbadi
Comment: fix broken syntax that caused most of the page not to render
Revision 41 as of 2010-01-06 02:55:48
Size: 5394
Editor: DavorOcelic
Comment:
Deletions are listed first in each hunk; additions follow.
Line 3: Line 3:
This page explains some nuances of the Andrew File System (AFS), which we use to serve home directories. In 2007, when we switched to the Peer1 colocation provider and expanded our infrastructure, we decided to use AFS (OpenAFS) as the basis for our technical setup.

AFS is, strictly speaking, just a distributed filesystem, but it mandates the use of Kerberos and has a whole set of rules of its own. Since we decided to keep all our data files in AFS, the config and init scripts of most (if not all) services had to be modified to support AFS.

We have configured all traditional Unix services, DomTool, Exim, and even MySQL and PostgreSQL to use AFS; where possible, services fork processes under the corresponding user's privileges and obtain that user's AFS identity. (As of Jan 2010, databases are no longer in AFS but on ordinary Ext3 partitions. This was needed to solve database performance and reliability issues, and was made possible by the purchase of a new machine, which let one of the old machines be repurposed.)
Line 13: Line 17:
The `/afs` tree contains shared filesystems. `/afs/hcoop.net` (symlinked from `/afs/hcoop` as well) is our piece of the AFS-o-sphere. Subdirectories include:
The `/afs` tree contains shared filesystems. `/afs/hcoop.net` (symlinked from `/afs/hcoop` as well) is our piece of the AFS-o-sphere, but is not (yet) listed in the global CellServDB.

Subdirectories include:
Line 18: Line 24:
 * `/afs/hcoop.net/common/databases`, databases (no longer used)
Line 19: Line 26:
= Connecting to AFS from an HCoop server =
= Connecting to AFS =
Line 21: Line 28:
I found this handy summary of the commands that must be run:
  http://www.eos.ncsu.edu/remoteaccess/LinuxOpenAfs/kreset_debian/kreset
Upon login (which goes through the PAM krb5 and AFS modules), a Kerberos ticket and AFS token should be initialized automatically for the user, and they should find themselves in their home directory.
Line 24: Line 30:
On our servers, it seems sufficient to run:
Users wishing to authenticate manually can run '''kinit''' and '''aklog''' (see the manpages for all options) to obtain a ticket and token, and '''klist -5f''' and '''tokens''' to verify them.
Line 26: Line 32:
{{{
kinit
aklog
}}}
Also, AFS is a distributed filesystem and allows access from users' workstations. Using the appropriate '''kinit''' and '''aklog''' command-line switches, one can authenticate remotely to the HCOOP.NET cell and then SSH directly to HCoop without a password, or better yet, access their home directory from their local workstation, in `/afs/hcoop.net/user/U/US/$USERNAME`.
Line 31: Line 34:
These should be run automatically if you log in normally, but admins who manually `kinit` as different users (most often
to test access permissions) of course need to run both `kinit; aklog` to switch completely to the target user.
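For example, an admin checking what a member can access might run the following (jdoe is a hypothetical username used for illustration):

```shell
# Obtain a Kerberos ticket as another user (prompts for jdoe's password),
# then convert it into an AFS token
kinit jdoe
aklog

# Verify the resulting identity
klist -5f
tokens
```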
= Users and tokens =
Line 35: Line 36:
= The kadmin shell =
Every HCoop user "owns" a Kerberos principal and an AFS PTS entry named after their username. This account is intended to be used only interactively (by the person it belongs to).
Line 37: Line 38:
All Kerberos administration commands are run from a special shell called kadmin. There are two variants of kadmin:
kadmin is the usual, remote version of the command, which can be run on any machine; kadmin.local is the "local"
version, which can only be run on the AFS fileserver (deleuze).
For each user, there is also another principal named "$USER/daemon" in Kerberos (and "$USER.daemon" in AFS). This principal's key is exported into the file `/etc/keytabs/user.daemon/$USER` on all relevant machines and is chown-ed to the user's Unix account. This allows users' batch/noninteractive scripts to authenticate to Kerberos/AFS using a key from a file.
Line 41: Line 40:
Invoke kadmin.local as `sudo kadmin.local -p YOURUSERNAME_admin`. It is good to include "-p YOURUSERNAME_admin", or
kadmin will "authenticate" as the first user it finds in the ticket cache, which may or may not be the username you
expected. All the administrative commands would work anyway (since you ran kadmin.local), but an incorrect principal
name would make various statistics wrong (such as the name of the principal who added or changed entries in the DB).
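A minimal session might look like this (YOURUSERNAME_admin stands in for your own admin principal):

```shell
# Start the local kadmin shell on the fileserver under an explicit principal
sudo kadmin.local -p YOURUSERNAME_admin

# Inside the shell you can then run administrative commands, e.g.:
#   kadmin.local:  listprincs
```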
This also allows for more fine-grained control, as permissions need to be granted explicitly to $USER.daemon in order to do anything with the data. So even if a service running under a certain Unix user (or root!) is compromised, the attacker's options will be minimal.
Line 46: Line 42:
To invoke kadmin, use `kadmin -p YOURUSERNAME_admin`. In the normal course of action, kadmin asks for a password. This is
impractical for automated scripts. Instead of a password, you can also pass a keytab file. Our keytabs are
saved in /etc/keytabs/ on each system, and they are readable by group 'wheel'. So administrators should be able
to invoke 'kadmin' (to use the control shell) or kinit/k5start (to impersonate any user) by supplying the target user's key from
a keytab, such as `kadmin -p domtool -k -t /etc/keytabs/domtool`.
Furthermore, user tickets and tokens expire periodically. One has to either invoke kinit/aklog again or set up a tool such as '''k5start''' to perform automatic renewal.
Line 52: Line 44:
= Creating a new user =
= Privileges =
Line 54: Line 46:
We follow the convention that Kerberos users for daemons are named `$DAEMON`, where `$DAEMON` is the name of the daemon (for instance, the name of the system user it runs as, or the name of its `/etc/init.d` file). ''Some daemons
currently use a DAEMON/HOST scheme, but this will be changed later and is not to be used for any new principals
you create.''
AFS uses ACLs, a more elaborate permissions model than the traditional Unix rwx modes. (Although the benefit is not as great any more, given the availability of POSIX ACLs for Linux.)
Line 58: Line 48:
To add the Kerberos principal for a daemon, run this in kadmin:{{{
addprinc -randkey -policy service $DAEMON}}}
However, there are a few intrinsic AFS properties that must be mentioned:
Line 61: Line 50:
AFS users exist separately from Kerberos principals. To add the AFS user for a daemon, assigning it UID `$UID`, run:{{{
pts createuser $DAEMON -id $UID}}}
 1. AFS ACLs are per directory. All contained files inherit the directory's ACL. (A subdirectory can define its own ACLs, of course.)
 1. When a subdirectory is created, it inherits the ACL of its parent. (A much better approach than Unix filesystems, where you need +s on the immediate parent directory to get this behavior.)
 1. It is possible to make user files unreadable to an attacker, even if they break into the "root" account on the machine.
Line 64: Line 54:
"keytab" files smooth the way for daemons that run with AFS privileges. An access-protected local file contains a user's credentials, and daemons read these files on startup in order to authenticate.
== Permissions and quota ==
Line 66: Line 56:
To create a keytab for a daemon, run this in kadmin:{{{
ktadd -k /etc/keytabs/$DAEMON -e "des3-hmac-sha1:normal rc4-hmac:normal" $DAEMON}}}
Then, from a regular shell (not kadmin), fix its ownership and permissions:{{{
chown $DAEMON:wheel /etc/keytabs/$DAEMON
chmod 440 /etc/keytabs/$DAEMON
}}}
To give $USER.daemon actual permissions in AFS space, for the most common actions `fs setacl DIR $USER.daemon read` or `write`
is sufficient. All subdirectories created within the top-level directory you granted permissions on will, as said,
inherit those permissions, and this is what you want in 99% of cases.
Line 72: Line 60:
In the example above, only one key (of the 4 or 5 created) is exported for a user. Sometimes it might be desirable to
export only a specific key into a keytab file, but we generally just omit the `-e KEY_TYPE` parameter and export
all keys to the keytab file.

You can view keys stored in a keytab by doing `sudo klist -k /etc/keytabs/KEYTAB_FILE`.

To make daemons properly kinit/aklog as the user you created for them, use the `k5start` command. Many examples
of its use can already be found in our /etc/init.d/ scripts. Important options include `-U` (kinit as
the first principal found in the keytab file, without the need to explicitly name a principal), `-f` (which specifies
the keytab file to kinit from), and `-K MINUTES` (which renews the ticket every MINUTES, so that daemons can run
for long periods of time).
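A sketch of a typical invocation, with illustrative paths (check the real /etc/init.d/ scripts for the exact form used on our machines):

```shell
# Run a daemon under its own Kerberos/AFS identity:
#   -b  go to the background after the first authentication
#   -U  use the first principal found in the keytab
#   -t  run aklog after obtaining the ticket
#   -K  renew every 30 minutes
k5start -b -U -f /etc/keytabs/$DAEMON -t -K 30 -- /usr/sbin/$DAEMON
```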

To give $DAEMON actual permissions in AFS space, for the most common actions `fs setacl DIR $DAEMON read` or `write`
is sufficient. All subdirectories created within the top-level directory you granted permissions on will
inherit those permissions.

= Listing and setting quotas =
== Listing and setting quotas ==
Line 94: Line 66:
Line 98: Line 69:

= Problems =

HCoop members have so far reported the following problems with AFS:

 * They cannot access files (Permission denied). This happens when their ticket/token expires in a long-running SSH session, or (most notably) when they detach a SCREEN session and return later. The solution is to run kinit/aklog manually or to have k5start running in the background.
 * They cannot access files (Timed out). Sometimes the volume is marked as needing salvage and becomes inaccessible. Run "vos salvage" on the user volume (not the whole partition!).
 * They cannot access files (Timed out). Sometimes this is due to perceived inaccessibility of the fileserver. It helps to run '''fs checks; fs checkv'''.

In 2007, when we switched to the Peer1 colocation provider and expanded our infrastructure, we decided to use AFS (OpenAFS) as the basis for our technical setup.

AFS is, strictly speaking, just a distributed filesystem, but it mandates the use of Kerberos and has a whole set of rules of its own. Since we decided to keep all our data files in AFS, the config and init scripts of most (if not all) services had to be modified to support AFS.

We have configured all traditional Unix services, DomTool, Exim, and even MySQL and PostgreSQL to use AFS; where possible, services fork processes under the corresponding user's privileges and obtain that user's AFS identity. (As of Jan 2010, databases are no longer in AFS but on ordinary Ext3 partitions. This was needed to solve database performance and reliability issues, and was made possible by the purchase of a new machine, which let one of the old machines be repurposed.)

Basic Architecture

Using the shared filesystem involves a combination of Kerberos and OpenAFS.

File conventions

The /afs tree contains shared filesystems. /afs/hcoop.net (symlinked from /afs/hcoop as well) is our piece of the AFS-o-sphere, but is not (yet) listed in the global CellServDB.

Subdirectories include:

  • /afs/hcoop.net/user, the home of home directories

  • /afs/hcoop.net/user/U/US/$USERNAME, $USERNAME's home directory

  • /afs/hcoop.net/common/etc, the home of non-platform-specific fun stuff like DomTool

  • /afs/hcoop.net/common/databases, databases (no longer used)

Connecting to AFS

Upon login (which goes through the PAM krb5 and AFS modules), a Kerberos ticket and AFS token should be initialized automatically for the user, and they should find themselves in their home directory.

Users wishing to authenticate manually can run kinit and aklog (see the manpages for all options) to obtain a ticket and token, and klist -5f and tokens to verify them.
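The manual sequence, for reference:

```shell
kinit        # obtain a Kerberos ticket (prompts for your password)
aklog        # derive an AFS token from the ticket
klist -5f    # verify the ticket and its flags
tokens       # verify the AFS token
```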

Also, AFS is a distributed filesystem and allows access from users' workstations. Using the appropriate kinit and aklog command-line switches, one can authenticate remotely to the HCOOP.NET cell and then SSH directly to HCoop without a password, or better yet, access their home directory from their local workstation, in /afs/hcoop.net/user/U/US/$USERNAME.

Users and tokens

Every HCoop user "owns" a Kerberos principal and an AFS PTS entry named after their username. This account is intended to be used only interactively (by the person it belongs to).

For each user, there is also another principal named "$USER/daemon" in Kerberos (and "$USER.daemon" in AFS). This principal's key is exported into the file /etc/keytabs/user.daemon/$USER on all relevant machines and is chown-ed to the user's Unix account. This allows users' batch/noninteractive scripts to authenticate to Kerberos/AFS using a key from a file.
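For example, a member's cron job could authenticate like this (jdoe is a hypothetical username used for illustration):

```shell
# Authenticate as jdoe/daemon using the exported keytab instead of a password,
# then derive an AFS token
kinit -k -t /etc/keytabs/user.daemon/jdoe jdoe/daemon
aklog
```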

This also allows for more fine-grained control, as permissions need to be granted explicitly to $USER.daemon in order to do anything with the data. So even if a service running under a certain Unix user (or root!) is compromised, the attacker's options will be minimal.

Furthermore, user tickets and tokens expire periodically. One has to either invoke kinit/aklog again or set up a tool such as k5start to perform automatic renewal.

Privileges

AFS uses ACLs, a more elaborate permissions model than the traditional Unix rwx modes. (Although the benefit is not as great any more, given the availability of POSIX ACLs for Linux.)

However, there are a few intrinsic AFS properties that must be mentioned:

  1. AFS ACLs are per directory. All contained files inherit the directory's ACL. (A subdirectory can define its own ACLs, of course.)
  2. When a subdirectory is created, it inherits the ACL of its parent. (A much better approach than Unix filesystems, where you need +s on the immediate parent directory to get this behavior.)
  3. It is possible to make user files unreadable to an attacker, even if they break into the "root" account on the machine.

Permissions and quota

To give $USER.daemon actual permissions in AFS space, for the most common actions fs setacl DIR $USER.daemon read or write is sufficient. All subdirectories created within the top-level directory you granted permissions on will, as said, inherit those permissions, and this is what you want in 99% of cases.
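For example, with a hypothetical member jdoe and one of their directories:

```shell
# Grant jdoe.daemon read access; newly created subdirectories will inherit it
fs setacl /afs/hcoop.net/user/j/jd/jdoe/public jdoe.daemon read

# Inspect the resulting ACL
fs listacl /afs/hcoop.net/user/j/jd/jdoe/public
```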

Listing and setting quotas

To list volume quota, run

fs lq DIR

To set volume quota in 1-kilobyte blocks, run

fs sq DIR -max SIZE
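For example, on a hypothetical member's home directory:

```shell
# Show the volume quota and current usage
fs lq /afs/hcoop.net/user/j/jd/jdoe

# Raise the quota to 2 GB (2097152 one-kilobyte blocks)
fs sq /afs/hcoop.net/user/j/jd/jdoe -max 2097152
```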

Problems

HCoop members have so far reported the following problems with AFS:

  • They cannot access files (Permission denied). This happens when their ticket/token expires in a long-running SSH session, or (most notably) when they detach a SCREEN session and return later. The solution is to run kinit/aklog manually or to have k5start running in the background.
  • They cannot access files (Timed out). Sometimes the volume is marked as needing salvage and becomes inaccessible. Run "vos salvage" on the user volume (not the whole partition!).
  • They cannot access files (Timed out). Sometimes this is due to perceived inaccessibility of the fileserver. It helps to run fs checks; fs checkv.
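The first and third problems can usually be cleared from the affected session itself:

```shell
# Re-authenticate after a ticket/token has expired (e.g. in a detached screen)
kinit && aklog

# If access times out, re-check servers and flush cached volume state
fs checks     # abbreviation of fs checkservers
fs checkv     # abbreviation of fs checkvolumes
```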

AndrewFileSystem (last edited 2018-11-15 03:45:21 by ClintonEbadi)