Administration of OpenAFS
Follow the InstallationProcedure and create a new Puppet-managed host.
The AFS server stores its files in /vicepa, so create that directory, ensuring it resides on whatever storage (RAID, etc.) you want to use as AFS backing. If it is not an independent volume, you must tell AFS that it is safe to use by creating the AlwaysAttach file (touch /vicepa/AlwaysAttach).
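The partition setup can be rehearsed with a short shell sketch. Here it is pointed at a scratch directory so it can run unprivileged; on a real server you would use /vicepa itself, as root, on the AFS backing storage:

```shell
# Rehearsal of the /vicepa setup in a scratch directory; on the real
# server, set VICEPA=/vicepa and run as root.
VICEPA="${TMPDIR:-/tmp}/vicepa-demo"

mkdir -p "$VICEPA"

# If the partition is not an independent volume, create the AlwaysAttach
# sentinel file so the fileserver knows it is safe to use anyway.
touch "$VICEPA/AlwaysAttach"

ls "$VICEPA"
```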
To create the basic server config, simply include Puppet class hcoop::service::openafs::server. This will install the required Debian packages, set up firewall rules, install the BosConfig, and update CellServDB on all servers.
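In the node's Puppet manifest this amounts to a one-line include. A sketch, with a hypothetical node name (only the class name is taken from above):

```puppet
# Hypothetical node definition for the new AFS server.
node 'afs3.hcoop.net' {
  include hcoop::service::openafs::server
}
```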
CellServDB still needs to be updated manually in Puppet; this step will eventually be replaced with a collector.
To add the server to the cluster, all existing AFS servers must be restarted. Run the following for each server in turn, waiting 30 seconds between servers, restarting the newly added server last:
bos restart $server vlserver ptserver
Unfortunately this really is necessary.
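The rolling restart can be scripted roughly as follows. The hostnames are hypothetical placeholders; DRY_RUN=echo prints the commands instead of executing them, so drop it when running against a real cell:

```shell
# Rolling restart sketch. Hostnames are placeholders; list the existing
# database servers first and the newly added server last.
DRY_RUN="${DRY_RUN:-echo}"   # unset this to execute for real
PLAN="${TMPDIR:-/tmp}/afs-restart-plan.txt"
: > "$PLAN"
for server in afs1.hcoop.net afs2.hcoop.net afs3.hcoop.net; do
  $DRY_RUN bos restart "$server" vlserver ptserver >> "$PLAN"
  $DRY_RUN sleep 30   # let UBIK settle before touching the next host
done
cat "$PLAN"
```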
The cluster will elect a new leader within five minutes; service should remain uninterrupted. Verify that all daemons are running with bos status $server, and verify UBIK status with udebug $server 7002 and udebug $server 7003.
We want most of our read-only volumes to be replicated as widely as possible. The current set to clone is:
for vol in common.bin common.logs old root.afs root.cell; do
  vos addsite $newserver /vicepa $vol
  vos release $vol
done
For the new AFS server to actually be used, it needs to be discoverable via DNS. Edit the hcoop.net DomTool configuration and use the afsDBServer action to add the required DNS alias and SRV records, for example: afsDBServer "afsdb3" outpost_ip;.
TODO (See SetupNewAfsServer for rough idea).