Creating Volumes to Simplify Administration

This section discusses how to create volumes in ways that make administering your system easier.

At the top levels of your file tree (at least through the third level), each directory generally corresponds to a separate volume. Some cells also configure the subdirectories of some third level directories as separate volumes. Common examples are the /afs/cellname/common and /afs/cellname/usr directories.

You do not have to create a separate volume for every directory level in a tree, but doing so has the advantage that each volume tends to be smaller and easier to move for load balancing. The overhead for a mount point is no greater than for a standard directory, and the volume structure itself requires little disk space. Most cells find that below the fourth level of the tree, a separate volume for each directory is no longer efficient. For instance, while each user's home directory (at the fourth level in the tree) corresponds to a separate volume, all of the subdirectories in the home directory normally reside in the same volume.

Keep in mind that only one volume can be mounted at a given directory location in the tree. In contrast, a volume can be mounted at several locations, though this is not recommended because it distorts the hierarchical nature of the file tree, potentially causing confusion.
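
As a brief illustration of mount point manipulation, the following sketch creates and then removes a mount point with the fs mkmount and fs rmmount commands; the cell name example.com and the volume name proj.portafs are placeholders.

   % fs mkmount -dir /afs/example.com/proj/portafs -vol proj.portafs
   % fs rmmount -dir /afs/example.com/proj/portafs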

Assigning Volume Names

You can name your volumes anything you choose, subject to a few restrictions:

  • Read/write volume names can be up to 22 characters in length. The overall maximum length for volume names is 31 characters, which leaves room to add the nine-character .readonly extension to read-only volumes.

  • Do not add the .readonly and .backup extensions to volume names yourself, even if they are appropriate. The Volume Server adds them automatically as it creates a read-only or backup version of a volume.

  • There must be volumes named root.afs and root.cell, mounted respectively at the top (/afs) level in the filespace and just below that level, at the cell's name (for example, at /afs/example.com in the Example Corporation cell).

    Deviating from these names only creates confusion and extra work. Changing the name of the root.afs volume, for instance, means that you must use the -rootvol argument to the afsd program on every client machine to name the alternate volume, as the sketch following this list illustrates.

    Similarly, changing the root.cell volume name prevents users in foreign cells from accessing your filespace, if the mount point for your cell in their filespace refers to the conventional root.cell name. Of course, this is one way to make your cell invisible to other cells.
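
To illustrate the extra work involved, the following sketch shows how a client's afsd invocation would have to name an alternate top-level volume. The volume name root.custom and the binary's path are hypothetical, and afsd normally takes many other arguments.

   # /usr/vice/etc/afsd -rootvol root.custom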

It is best to assign volume names that indicate the type of data they contain, and to use similar names for volumes with similar contents. It is also helpful if the volume name is similar to (or at least has elements in common with) the name of the directory at which it is mounted. Understanding the pattern then enables you to guess accurately what a volume contains and where it is mounted.

Many cells find that the most effective volume naming scheme puts a common prefix on the names of all related volumes. Table 1 describes the recommended prefixing scheme.

Table 1. Suggested volume prefixes

Prefix      Contents                                         Example Name    Example Mount Point
common.     popular programs and files                       common.etc      /afs/cellname/common/etc
src.        source code                                      src.afs         /afs/cellname/src/afs
proj.       project data                                     proj.portafs    /afs/cellname/proj/portafs
test.       testing or other temporary data                  test.smith      /afs/cellname/usr/smith/test
user.       user home directory data                         user.terry      /afs/cellname/usr/terry
sys_type.   programs compiled for an operating system type   rs_aix42.bin    /afs/cellname/rs_aix42/bin
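
For example, creating and mounting the common.etc volume from the table might look like the following sketch; the file server machine fs1.example.com and the partition /vicepa are placeholders.

   % vos create -server fs1.example.com -partition /vicepa -name common.etc
   % fs mkmount -dir /afs/example.com/common/etc -vol common.etc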

Table 2 is a more specific example for a cell's rs_aix42 system volumes and directories:

Table 2. Example volume-prefixing scheme

Example Name          Example Mount Point
rs_aix42.bin          /afs/cellname/rs_aix42/bin
rs_aix42.etc          /afs/cellname/rs_aix42/etc
rs_aix42.usr          /afs/cellname/rs_aix42/usr
rs_aix42.usr.afsws    /afs/cellname/rs_aix42/usr/afsws
rs_aix42.usr.lib      /afs/cellname/rs_aix42/usr/lib
rs_aix42.usr.bin      /afs/cellname/rs_aix42/usr/bin
rs_aix42.usr.etc      /afs/cellname/rs_aix42/usr/etc
rs_aix42.usr.inc      /afs/cellname/rs_aix42/usr/inc
rs_aix42.usr.man      /afs/cellname/rs_aix42/usr/man
rs_aix42.usr.sys      /afs/cellname/rs_aix42/usr/sys
rs_aix42.usr.local    /afs/cellname/rs_aix42/usr/local

There are several advantages to this scheme:

  • The volume name is similar to the mount point name in the filespace. In all of the entries in Table 2, for example, the only difference between the volume name and the mount point name is that the former uses periods as separators and the latter uses slashes. Another advantage is that the volume name indicates the contents, or at least suggests the directory on which to issue the ls command to learn the contents.

  • It makes it easy to manipulate groups of related volumes at one time. In particular, the vos backupsys command's -prefix argument enables you to create a backup version of every volume whose name starts with the same string of characters. Making a backup version of each volume is one of the first steps in backing up a volume with the AFS Backup System, and doing it for many volumes with one command saves you a good deal of typing, as the sketch following this list illustrates. For instructions on creating backup volumes, see Creating Backup Volumes. For information on the AFS Backup System, see Configuring the AFS Backup System and Backing Up and Restoring AFS Data.

  • It makes it easy to group related volumes together on a partition. Grouping related volumes together has several advantages of its own, discussed in Grouping Related Volumes on a Partition.
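
As an illustration of the -prefix argument, the following single command creates a backup version of every volume named with the user. prefix from Table 1.

   % vos backupsys -prefix user.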

Grouping Related Volumes on a Partition

If your cell is large enough to make it practical, consider grouping related volumes together on a partition. In general, you need at least three file server machines for volume grouping to be effective. Grouping has several advantages, which are most obvious when the file server machine becomes inaccessible:

  • If you keep a hardcopy record of the volumes on a partition, you know which volumes are unavailable. You can keep such a record without grouping related volumes, but a list composed of unrelated volumes is much harder to maintain. Note that the record must be on paper, because the outage can prevent you from accessing an online copy or from issuing the vos listvol command, which gives you the same information; a sketch of generating such a record follows this list.

  • The effect of an outage is more localized. For example, if all of the binaries for a given system type are on one partition, then only users of that system type are affected. If a partition houses binary volumes from several system types, then an outage can affect more people, particularly if the binaries that remain available are interdependent with those that are not available.
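
One way to generate such a record is to capture the vos listvol output for each partition and print it; the server name fs1.example.com, the partition /vicepa, and the use of lpr for printing are placeholders for your own choices.

   % vos listvol fs1.example.com /vicepa > fs1.vicepa.volumes
   % lpr fs1.vicepa.volumes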

The advantages of grouping related volumes on a partition do not necessarily extend to the grouping of all related volumes on one file server machine. For instance, it is probably unwise in a cell with two file server machines to put all system volumes on one machine and all user volumes on the other. An outage of either machine probably affects everyone.

Admittedly, the need to move volumes for load balancing purposes can limit the practicality of grouping related volumes. You need to weigh the complementary advantages case by case.

When to Replicate Volumes

As discussed in Replication, replication refers to making a copy, or clone, of a read/write source volume and then placing the copy on one or more additional file server machines. Replicating a volume can increase the availability of the contents. If one file server machine housing the volume becomes inaccessible, users can still access the copy of the volume stored on a different machine. No one machine is likely to become overburdened with requests for a popular file, either, because the file is available from several machines.

However, replication is not appropriate for all cells. If a cell does not have much disk space, replication can be unduly expensive, because each clone not on the same partition as the read/write source takes up as much disk space as its source volume did at the time the clone was made. Also, if you have only one file server machine, replication uses up disk space without increasing availability.

Replication is also not appropriate for volumes that change frequently. You must issue the vos release command every time you need to update a read-only volume to reflect changes in its read/write source.
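
For example, after modifying files in a replicated volume's read/write source, a single command propagates the changes to all of the volume's read-only sites; the volume name here is a placeholder.

   % vos release common.etc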

For both of these reasons, replication is appropriate only for popular volumes whose contents do not change very often, such as system binaries and other volumes mounted at the upper levels of your filespace. User volumes usually exist only in a read/write version since they change so often.

If you are replicating any volumes, you must replicate the root.afs and root.cell volumes, preferably at two or three sites each (even if your cell only has two or three file server machines). The Cache Manager needs to pass through the directories corresponding to the root.afs and root.cell volumes as it interprets any pathname. The unavailability of these volumes makes all other volumes unavailable too, even if the file server machines storing the other volumes are still functioning.
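
A minimal sketch of defining an additional read-only site for the root.afs volume and releasing it follows; the machine name fs2.example.com and the partition /vicepa are placeholders, and the same pair of commands applies to root.cell.

   % vos addsite -server fs2.example.com -partition /vicepa -id root.afs
   % vos release root.afs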

Another reason to replicate the root.afs volume is that it can lessen the load on the File Server machine. The Cache Manager has a bias toward accessing a read-only version of the root.afs volume if it is replicated, which puts the Cache Manager onto the read-only path through the AFS filespace. While on the read-only path, the Cache Manager attempts to access a read-only copy of replicated volumes. The File Server needs to track only one callback per Cache Manager for all of the data in a read-only volume, rather than the one callback per file it must track for read/write volumes. Fewer callbacks translate into a smaller load on the File Server.

If the root.afs volume is not replicated, the Cache Manager follows a read/write path through the filespace, accessing the read/write version of each volume. The File Server distributes and tracks a separate callback for each file in a read/write volume, imposing a greater load on it.

For more on read/write and read-only paths, see The Rules of Mount Point Traversal.

It also makes sense to replicate system binary volumes in many cases, as well as the volume corresponding to the /afs/cellname/usr directory and the volumes corresponding to the /afs/cellname/common directory and its subdirectories.

It is a good idea to place a replica on the same partition as the read/write source. In this case, the read-only volume is a clone (like a backup volume): it is a copy of the source volume's vnode index, rather than a full copy of the volume contents. Only if the read/write volume moves to another partition or changes substantially does the read-only volume consume significant disk space. Read-only volumes kept on other servers' partitions always consume the full amount of disk space that the read/write source consumed when the read-only volume was created.

Note that a read-only volume cannot reside on a different partition of the same file server machine that houses the read/write volume: a "cheap" read-only clone must share the read/write volume's partition, and every other read-only site must be on a different machine.
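
To obtain the inexpensive clone just described, define the read-only site on the same machine and partition that house the read/write volume before releasing it; the names below are placeholders.

   % vos addsite -server fs1.example.com -partition /vicepa -id common.etc
   % vos release common.etc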

The Default Quota and ACL on a New Volume

Every AFS volume has associated with it a quota that limits the amount of disk space the volume is allowed to use. To set and change quota, use the commands described in Setting and Displaying Volume Quota and Current Size.

By default, every new volume is assigned a space quota of 5000 KB blocks unless you include the -maxquota argument to the vos create command. Also by default, the ACL on the root directory of every new volume grants all permissions to the members of the system:administrators group. To learn how to change these values when creating an account with individual commands, see To create one user account with individual commands. When using uss commands to create accounts, you can specify alternate ACL and quota values in the template file's V instruction; see Creating a Volume with the V Instruction.
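
For illustration, the following sketch overrides the default quota at creation time and then adjusts the quota and ACL after mounting the volume; the machine, partition, volume, and user names are placeholders.

   % vos create -server fs1.example.com -partition /vicepa -name user.terry -maxquota 10000
   % fs mkmount -dir /afs/example.com/usr/terry -vol user.terry
   % fs setquota /afs/example.com/usr/terry -max 20000
   % fs setacl -dir /afs/example.com/usr/terry -acl terry all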