Instructions for the following procedures appear in the indicated section of this chapter.
The instructions make the following assumptions.
You have already installed your cell's first file server machine by following the instructions in Installing the First AFS Machine
You are logged in as the local superuser root
You are working at the console
A standard version of one of the operating systems supported by the current version of AFS is running on the machine
You can access the data on the OpenAFS Binary Distribution for your operating system, either on the local filesystem or via an NFS mount of the distribution's contents.
The procedure for installing a new file server machine is similar to installing the first file server machine in your cell. There are a few parts of the installation that differ depending on whether the machine is the same AFS system type as an existing file server machine or is the first file server machine of its system type in your cell. The differences mostly concern the source for the needed binaries and files, and what portions of the Update Server you install:
On a new system type, you must load files and binaries from the OpenAFS distribution. You may install the server portion of the Update Server to make this machine the binary distribution machine for its system type.
On an existing system type, you can copy files and binaries from a previously installed file server machine, rather than from the OpenAFS distribution. You may install the client portion of the Update Server to accept updates of binaries, because a previously installed machine of this type was installed as the binary distribution machine.
On some system types, distribution of the appropriate binaries may be achieved using the system's own package management system. In these cases, it is recommended that you use that system rather than installing the binaries by hand.
These instructions are brief; for more detailed information, refer to the corresponding steps in Installing the First AFS Machine.
To install a new file server machine, perform the following procedures:
Copy needed binaries and files onto this machine's local disk, as required.
Incorporate AFS modifications into the kernel
Configure partitions for storing volumes
Replace the standard fsck utility with the AFS-modified version on some system types
Start the Basic OverSeer (BOS) Server
Start the appropriate portion of the Update Server, if required
Start the fs process, which incorporates three component processes: the File Server, Volume Server, and Salvager
After completing the instructions in this section, you can install database server functionality on the machine according to the instructions in Installing Database Server Functionality.
If your operating system's AFS distribution is supplied as packages, such as .rpms or .debs, you should simply install those packages as detailed in the previous chapter.
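For example, on a Debian-derived system, a minimal sketch of the equivalent package installation might look like the following; the package names shown (openafs-fileserver, openafs-client, openafs-modules-dkms) reflect common Debian packaging and may differ on your distribution.
# apt-get install openafs-fileserver openafs-client openafs-modules-dkms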
Create the /usr/afs and /usr/vice/etc directories on the local disk. Subsequent instructions copy files from the AFS distribution into them, at the appropriate point for each system type.
# mkdir /usr/afs
# mkdir /usr/afs/bin
# mkdir /usr/vice
# mkdir /usr/vice/etc
# mkdir /tmp/afsdist
As on the first file server machine, the initial procedures in installing an additional file server machine vary a good deal from platform to platform. For convenience, the following sections group together all of the procedures for a system type. Most of the remaining procedures are the same on every system type, but differences are noted as appropriate. The initial procedures are the following.
Incorporate AFS modifications into the kernel, either by using a dynamic kernel loader program or by building a new static kernel
Configure server partitions to house AFS volumes
Replace the operating system vendor's fsck program with a version that recognizes AFS data
If the machine is to remain an AFS client machine, modify the machine's authentication system so that users obtain an AFS token as they log into the local file system. (For this procedure only, the instructions direct you to the platform-specific section in Installing the First AFS Machine.)
To continue, proceed to the section for this system type:
Begin by running the AFS initialization script to call the insmod program, which dynamically loads AFS modifications into the kernel. Then create partitions for storing AFS volumes. You do not need to replace the Linux fsck program.
The procedure for starting up OpenAFS depends upon your distribution.
For Fedora and RedHat Enterprise Linux systems (or their derivatives), download and install the RPM set for your operating system from the OpenAFS distribution site. You will need the openafs and openafs-server packages, along with an openafs-kernel package matching your current, running kernel. If you wish to install client functionality, you will also require the openafs-client package.
You can find the version of your current kernel by running
# uname -r
2.6.20-1.2933.fc6
Once downloaded, the packages may be installed with the rpm command
# rpm -U openafs-* openafs-client-* openafs-server-* openafs-kernel-*
For systems which are provided as a tarball, or built from source, unpack the distribution tarball. The examples below assume that you have unpacked the files into the /tmp/afsdist directory. If you pick a different location, substitute this in all of the following examples. Once you have unpacked the distribution, change directory as indicated.
# cd /tmp/afsdist/linux/dest/root.client/usr/vice/etc
Copy the AFS kernel library files to the local /usr/vice/etc/modload directory.
The filenames for the libraries have the format libafs-version.o, where version indicates the kernel build level. The string .mp in the version indicates that the file is appropriate for machines running a multiprocessor kernel.
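For example, a library matching the 2.6.20-1.2933.fc6 kernel shown earlier in this section would be named libafs-2.6.20-1.2933.fc6.o, or libafs-2.6.20-1.2933.fc6.mp.o for the multiprocessor build (names illustrative).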
# cp -rp modload /usr/vice/etc
Copy the AFS initialization script to the local directory for initialization files (by convention, /etc/rc.d/init.d on Linux machines). Note the removal of the .rc extension as you copy the script.
# cp -p afs.rc /etc/rc.d/init.d/afs
Create a directory called /vicepxx for each AFS server partition you are configuring (there must be at least one). Repeat the command for each partition.
# mkdir /vicepxx
Add a line with the following format to the file systems registry file, /etc/fstab, for each directory just created. The entry maps the directory name to the disk partition to be mounted on it.
/dev/disk /vicepxx ext2 defaults 0 2
The following is an example for the first partition being configured.
/dev/sda8 /vicepa ext2 defaults 0 2
Create a file system on each partition that is to be mounted at a /vicepxx directory. The following command is probably appropriate, but consult the Linux documentation for more information.
# mkfs -v /dev/disk
Mount each partition by issuing either the mount -a command to mount all partitions at once or the mount command to mount each partition in turn.
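For example, continuing with the /vicepa entry shown above (the device name /dev/sda8 is taken from that example and will differ on your machine):
# mkfs -v /dev/sda8
# mount /vicepa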
If the machine is to remain an AFS client, incorporate AFS into its authentication system, following the instructions in Enabling AFS Login on Linux Systems.
Proceed to Starting Server Programs.
Begin by running the AFS initialization script to call the modload program, which dynamically loads AFS modifications into the kernel. Then configure partitions and replace the Solaris fsck program with a version that correctly handles AFS volumes.
Unpack the OpenAFS Solaris distribution tarball. The examples below assume that you have unpacked the files into the /tmp/afsdist directory. If you pick a different location, substitute this in all of the following examples. Once you have unpacked the distribution, change directory as indicated.
# cd /tmp/afsdist/sun4x_56/dest/root.client/usr/vice/etc
Copy the AFS initialization script to the local directory for initialization files (by convention, /etc/init.d on Solaris machines). Note the removal of the .rc extension as you copy the script.
# cp -p afs.rc /etc/init.d/afs
Copy the appropriate AFS kernel library file to the local file /kernel/fs/afs.
If the machine is running Solaris 11 on the x86_64 platform:
# cp -p modload/libafs64.o /kernel/drv/amd64/afs
If the machine is running Solaris 10 on the x86_64 platform:
# cp -p modload/libafs64.o /kernel/fs/amd64/afs
If the machine is running Solaris 2.6 or the 32-bit version of Solaris 7, its kernel supports NFS server functionality, and the nfsd process is running:
# cp -p modload/libafs.o /kernel/fs/afs
If the machine is running Solaris 2.6 or the 32-bit version of Solaris 7, and its kernel does not support NFS server functionality or the nfsd process is not running:
# cp -p modload/libafs.nonfs.o /kernel/fs/afs
If the machine is running the 64-bit version of Solaris 7, its kernel supports NFS server functionality, and the nfsd process is running:
# cp -p modload/libafs64.o /kernel/fs/sparcv9/afs
If the machine is running the 64-bit version of Solaris 7, and its kernel does not support NFS server functionality or the nfsd process is not running:
# cp -p modload/libafs64.nonfs.o /kernel/fs/sparcv9/afs
Run the AFS initialization script to load AFS modifications into the kernel. You can ignore any error messages about the inability to start the BOS Server or the Cache Manager or AFS client.
# /etc/init.d/afs start
When an entry called afs does not already exist in the local /etc/name_to_sysnum file, the script automatically creates it and reboots the machine to start using the new version of the file. If this happens, log in again as the superuser root after the reboot and run the initialization script again. This time the required entry exists in the /etc/name_to_sysnum file, and the modload program runs.
login: root
Password: root_password
# /etc/init.d/afs start
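The entry that the script adds resembles the following; the system call number 65 is the conventional value on most Solaris releases, but verify the actual entry in your machine's /etc/name_to_sysnum file rather than relying on this sketch.
afs                     65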
Create the /usr/lib/fs/afs directory to house the AFS-modified fsck program and related files.
# mkdir /usr/lib/fs/afs
# cd /usr/lib/fs/afs
Copy the vfsck binary to the newly created directory, changing the name as you do so.
# cp /tmp/afsdist/sun4x_56/dest/root.server/etc/vfsck fsck
Working in the /usr/lib/fs/afs directory, create the following links to Solaris libraries:
# ln -s /usr/lib/fs/ufs/clri
# ln -s /usr/lib/fs/ufs/df
# ln -s /usr/lib/fs/ufs/edquota
# ln -s /usr/lib/fs/ufs/ff
# ln -s /usr/lib/fs/ufs/fsdb
# ln -s /usr/lib/fs/ufs/fsirand
# ln -s /usr/lib/fs/ufs/fstyp
# ln -s /usr/lib/fs/ufs/labelit
# ln -s /usr/lib/fs/ufs/lockfs
# ln -s /usr/lib/fs/ufs/mkfs
# ln -s /usr/lib/fs/ufs/mount
# ln -s /usr/lib/fs/ufs/ncheck
# ln -s /usr/lib/fs/ufs/newfs
# ln -s /usr/lib/fs/ufs/quot
# ln -s /usr/lib/fs/ufs/quota
# ln -s /usr/lib/fs/ufs/quotaoff
# ln -s /usr/lib/fs/ufs/quotaon
# ln -s /usr/lib/fs/ufs/repquota
# ln -s /usr/lib/fs/ufs/tunefs
# ln -s /usr/lib/fs/ufs/ufsdump
# ln -s /usr/lib/fs/ufs/ufsrestore
# ln -s /usr/lib/fs/ufs/volcopy
Append the following line to the end of the file /etc/dfs/fstypes.
afs AFS Utilities
Edit the /sbin/mountall file, making two changes.
Add an entry for AFS to the case statement for option 2, so that it reads as follows:
case "$2" in ufs) foptions="-o p" ;; afs) foptions="-o p" ;; s5) foptions="-y -t /var/tmp/tmp$$ -D" ;; *) foptions="-y" ;;
Edit the file so that all AFS and UFS partitions are checked in parallel. Replace the following section of code:
# For fsck purposes, we make a distinction between ufs and
# other file systems
#
if [ "$fstype" = "ufs" ]; then
        ufs_fscklist="$ufs_fscklist $fsckdev"
        saveentry $fstype "$OPTIONS" $special $mountp
        continue
fi
with the following section of code:
# For fsck purposes, we make a distinction between ufs/afs
# and other file systems.
#
if [ "$fstype" = "ufs" -o "$fstype" = "afs" ]; then
        ufs_fscklist="$ufs_fscklist $fsckdev"
        saveentry $fstype "$OPTIONS" $special $mountp
        continue
fi
Create a directory called /vicepxx for each AFS server partition you are configuring (there must be at least one). Repeat the command for each partition.
# mkdir /vicepxx
Add a line with the following format to the file systems registry file, /etc/vfstab, for each partition to be mounted on a directory created in the previous step. Note the value afs in the fourth field, which tells Solaris to use the AFS-modified fsck program on this partition.
/dev/dsk/disk /dev/rdsk/disk /vicepxx afs boot_order yes
The following is an example for the first partition being configured.
/dev/dsk/c0t6d0s1 /dev/rdsk/c0t6d0s1 /vicepa afs 3 yes
Create a file system on each partition that is to be mounted at a /vicepxx directory. The following command is probably appropriate, but consult the Solaris documentation for more information.
# newfs -v /dev/rdsk/disk
Issue the mountall command to mount all partitions at once.
If the machine is to remain an AFS client, incorporate AFS into its authentication system, following the instructions in Enabling AFS Login and Editing the File Systems Clean-up Script on Solaris Systems.
Proceed to Starting Server Programs.
In this section you initialize the BOS Server, the Update Server, and the fs process. You begin by copying the necessary server files to the local disk.
Copy file server binaries to the local /usr/afs/bin directory.
On a machine of an existing system type, you can either copy files from the OpenAFS binary distribution or use a remote file transfer protocol to copy files from an existing server machine of the same system type. To load from the binary distribution, see the instructions just following for a machine of a new system type. If using a remote file transfer protocol, copy the complete contents of the existing server machine's /usr/afs/bin directory.
If you are working from a tarball distribution, rather than one distributed in a packaged format, you must use the following instructions to copy files from the OpenAFS Binary Distribution.
Unpack the distribution tarball. The examples below assume that you have unpacked the files into the /tmp/afsdist directory. If you pick a different location, substitute this in all of the following examples.
Copy files from the distribution to the local /usr/afs directory.
# cd /tmp/afsdist/sysname/root.server/usr/afs
# cp -rp * /usr/afs
Copy the contents of the /usr/afs/etc directory from an existing file server machine, using a remote file transfer protocol such as sftp or scp. If you use a system control machine, it is best to copy the contents of its /usr/afs/etc directory. If you choose not to run a system control machine, copy the directory's contents from any existing file server machine.
Change to the /usr/afs/bin directory and start the BOS Server (bosserver process). Include the -noauth flag to prevent the AFS processes from performing authorization checking. This is a grave compromise of security; finish the remaining instructions in this section in an uninterrupted pass.
# cd /usr/afs/bin
# ./bosserver -noauth
If you run a system control machine, create the upclientetc process as an instance of the client portion of the Update Server. It accepts updates of the common configuration files stored in the system control machine's /usr/afs/etc directory from the upserver process (server portion of the Update Server) running on that machine. The cell's first file server machine was installed as the system control machine in Starting the Server Portion of the Update Server. (If you do not run a system control machine, you must update the contents of the /usr/afs/etc directory on each file server machine, using the appropriate bos commands.)
By default, the Update Server performs updates every 300 seconds (five minutes). Use the -t argument to specify a different number of seconds. For the machine name argument, substitute the name of the machine you are installing. The command appears on multiple lines here only for legibility reasons.
# ./bos create <machine name> upclientetc simple \
     "/usr/afs/bin/upclient <system control machine> \
     [-t <time>] /usr/afs/etc" -cell <cell name> -noauth
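For example, with a hypothetical new machine fs2.example.com, system control machine fs1.example.com, and cell example.com, the command might read:
# ./bos create fs2.example.com upclientetc simple \
     "/usr/afs/bin/upclient fs1.example.com /usr/afs/etc" \
     -cell example.com -noauth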
Create an instance of the Update Server to handle distribution of the file server binaries stored in the /usr/afs/bin directory. If your architecture uses a package management system such as rpm or apt to maintain its binaries, note that distributing binaries via the Update Server may interfere with your local package management tools.
If this is the first file server machine of its AFS system type, create the upserver process as an instance of the server portion of the Update Server. It distributes its copy of the file server process binaries to the other file server machines of this system type that you install in future. Creating this process makes this machine the binary distribution machine for its type.
# ./bos create <machine name> upserver simple \
     "/usr/afs/bin/upserver -clear /usr/afs/bin" \
     -cell <cell name> -noauth
If this machine is an existing system type, create the upclientbin process as an instance of the client portion of the Update Server. It accepts updates of the AFS binaries from the upserver process running on the binary distribution machine for its system type. For distribution to work properly, the upserver process must already be running on that machine.
Use the -clear argument to specify that the upclientbin process requests unencrypted transfer of the binaries in the /usr/afs/bin directory. Binaries are not sensitive and encrypting them is time-consuming.
By default, the Update Server performs updates every 300 seconds (five minutes). Use the -t argument to specify a different number of seconds.
# ./bos create <machine name> upclientbin simple \
     "/usr/afs/bin/upclient <binary distribution machine> \
     [-t <time>] -clear /usr/afs/bin" -cell <cell name> -noauth
Issue the bos create command to start the fs process or the dafs process, depending on whether or not you want to run the Demand-Attach File Server. See Appendix C, The Demand-Attach File Server for more information on deciding whether to run it.
If you do not want to run the Demand-Attach File Server, start the fs process, which binds together the File Server, Volume Server, and Salvager.
# ./bos create <machine name> fs fs \
     /usr/afs/bin/fileserver /usr/afs/bin/volserver \
     /usr/afs/bin/salvager -cell <cell name> -noauth
If you want to run the Demand-Attach File Server, start the dafs process, which binds together the File Server, Volume Server, Salvage Server, and Salvager.
# ./bos create <machine name> dafs dafs \
     /usr/afs/bin/dafileserver /usr/afs/bin/davolserver \
     /usr/afs/bin/salvageserver \
     /usr/afs/bin/dasalvager -cell <cell name> -noauth
If you want this machine to be a client as well as a server, follow the instructions in this section. Otherwise, skip to Completing the Installation.
Begin by loading the necessary client files to the local disk. Then create the necessary configuration files and start the Cache Manager. For more detailed explanation of the procedures involved, see the corresponding instructions in Installing the First AFS Machine (in the sections following Overview: Installing Client Functionality).
If another AFS machine of this machine's system type exists, the AFS binaries are probably already accessible in your AFS filespace (the conventional location is /afs/cellname/sysname/usr/afsws). If not, or if this is the first AFS machine of its type, copy the AFS binaries for this system type into an AFS volume by following the instructions in Storing AFS Binaries in AFS. Because this machine is not yet an AFS client, you must perform the procedure on an existing AFS machine. However, remember to perform the final step (linking the local directory /usr/afsws to the appropriate location in the AFS file tree) on this machine itself. If you also want to create AFS volumes to house UNIX system binaries for the new system type, see Storing System Binaries in AFS.
Copy client binaries and files to the local disk.
On a machine of an existing system type, you can either load files from the OpenAFS Binary Distribution or use a remote file transfer protocol to copy files from an existing client machine of the same system type. To load from the binary distribution, see the instructions just following for a machine of a new system type. If using a remote file transfer protocol, copy the complete contents of the existing client machine's /usr/vice/etc directory.
On a machine of a new system type, you must use the following instructions to copy files from the OpenAFS Binary Distribution. If your distribution is provided in a packaged format, then simply installing the packages will perform the necessary actions.
Unpack the distribution tarball. The examples below assume that you have unpacked the files into the /tmp/afsdist directory. If you pick a different location, substitute this in all of the following examples.
Copy files to the local /usr/vice/etc directory.
This step places a copy of the AFS initialization script (and related files, if applicable) into the /usr/vice/etc directory. In the preceding instructions for incorporating AFS into the kernel, you copied the script directly to the operating system's conventional location for initialization files. When you incorporate AFS into the machine's startup sequence in a later step, you can choose to link the two files.
On some system types that use a dynamic kernel loader program, you previously copied AFS library files into a subdirectory of the /usr/vice/etc directory. On other system types, you copied the appropriate AFS library file directly to the directory where the operating system accesses it. The following commands do not copy or recopy the AFS library files into the /usr/vice/etc directory, because on some system types the library files consume a large amount of space. If you want to copy them, add the -r flag to the first cp command and skip the second cp command.
# cd /tmp/afsdist/sysname/root.client/usr/vice/etc
# cp -p * /usr/vice/etc
# cp -rp C /usr/vice/etc
Change to the /usr/vice/etc directory and create the ThisCell file as a copy of the /usr/afs/etc/ThisCell file. You must first remove the symbolic link to the /usr/afs/etc/ThisCell file that the BOS Server created automatically in Starting Server Programs.
# cd /usr/vice/etc
# rm ThisCell
# cp /usr/afs/etc/ThisCell ThisCell
Remove the symbolic link to the /usr/afs/etc/CellServDB file.
# rm CellServDB
Create the /usr/vice/etc/CellServDB file. Use a network file transfer program such as sftp or scp to copy it from one of the following sources, which are listed in decreasing order of preference:
Your cell's central CellServDB source file (the conventional location is /afs/cellname/common/etc/CellServDB)
The global CellServDB file maintained at grand.central.org
An existing client machine in your cell
The CellServDB.sample file included in the sysname/root.client/usr/vice/etc directory of each OpenAFS distribution; add an entry for the local cell by following the instructions in Creating the Client CellServDB File
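Whichever source you use, each cell's entry in the CellServDB file has the general format shown below: a line beginning with > that names the cell, followed by one line per database server machine. The cell name and addresses in this sketch are purely illustrative.
>example.com            #Example cell
192.0.2.10              #db1.example.com
192.0.2.11              #db2.example.com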
Create the cacheinfo file for either a disk cache or a memory cache. For a discussion of the appropriate values to record in the file, see Configuring the Cache.
To configure a disk cache, issue the following commands. If you are devoting a partition exclusively to caching, as recommended, you must also configure it, make a file system on it, and mount it at the directory created in this step.
# mkdir /usr/vice/cache
# echo "/afs:/usr/vice/cache:#blocks
" > cacheinfo
To configure a memory cache:
# echo "/afs:/usr/vice/cache:#blocks
" > cacheinfo
Create the local directory on which to mount the AFS filespace, by convention /afs. If the directory already exists, verify that it is empty.
# mkdir /afs
On non-packaged Linux systems, copy the afsd options file from the /usr/vice/etc directory to the /etc/sysconfig directory, removing the .conf extension as you do so.
# cp /usr/vice/etc/afs.conf /etc/sysconfig/afs
Edit the machine's AFS initialization script or afsd options file to set appropriate values for afsd command parameters. The script resides in the indicated location on each system type:
On Fedora and RHEL systems, /etc/sysconfig/openafs. Note that this file has a different format from a standard afsd options file.
On non-packaged Linux systems, /etc/sysconfig/afs (the afsd options file)
On Solaris systems, /etc/init.d/afs
Use one of the methods described in Configuring the Cache Manager to add the following flags to the afsd command line. If you intend for the machine to remain an AFS client, also set any performance-related arguments you wish.
Add the -memcache flag if the machine is to use a memory cache.
Add the -verbose flag to display a trace of the Cache Manager's initialization on the standard output stream.
Add the -dynroot or -afsdb flags if you wish to have a synthetic AFS root, as discussed in Enabling Access to Foreign Cells. An example options line appears after this list.
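As an illustrative sketch only: on Fedora and RHEL systems, the /etc/sysconfig/openafs file passes parameters to afsd through a shell variable. The variable name AFSD_ARGS reflects common packaging and may differ in your version; check the comments in the file itself.
AFSD_ARGS="-dynroot -afsdb -verbose"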
If appropriate, follow the instructions in Storing AFS Binaries in AFS to copy the AFS binaries for this system type into an AFS volume. See the introduction to this section for further discussion.
At this point you run the machine's AFS initialization script to verify that it correctly loads AFS modifications into the kernel and starts the BOS Server, which starts the other server processes. If you have installed client files, the script also starts the Cache Manager. If the script works correctly, perform the steps that incorporate it into the machine's startup and shutdown sequence. If there are problems during the initialization, attempt to resolve them. The AFS Product Support group can provide assistance if necessary.
If the machine is configured as a client using a disk cache, it can take a while for the afsd program to create all of the Vn files in the cache directory. Messages on the console trace the initialization process.
Issue the bos shutdown command to shut down the AFS server processes other than the BOS Server. Include the -wait flag to delay return of the command shell prompt until all processes shut down completely.
# /usr/afs/bin/bos shutdown <machine name> -wait
Issue the ps command to learn the BOS Server's process ID number (PID), and then the kill command to stop the bosserver process.
# ps appropriate_ps_options | grep bosserver
# kill bosserver_PID
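For example (the ps options shown work on both Linux and Solaris, and the PID is illustrative):
# ps -ef | grep bosserver
# kill 2381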
Run the AFS initialization script by issuing the appropriate commands for this system type.
On Fedora or RHEL Linux systems:
Reboot the machine and log in again as the local superuser root.
# cd /
# shutdown -r now
login: root
Password: root_password
Run the OpenAFS initialization scripts.
# /etc/rc.d/init.d/openafs-client start
# /etc/rc.d/init.d/openafs-server start
Issue the chkconfig command to activate the openafs-client and openafs-server configuration variables. Based on the instruction in the AFS initialization files that begins with the string #chkconfig, the command automatically creates the symbolic links that incorporate the script into the Linux startup and shutdown sequence.
# /sbin/chkconfig --add openafs-client
# /sbin/chkconfig --add openafs-server
On Linux systems:
Reboot the machine and log in again as the local superuser root.
# cd /
# shutdown -r now
login: root
Password: root_password
Run the OpenAFS initialization script.
# /etc/rc.d/init.d/afs start
Issue the chkconfig command to activate the afs configuration variable. Based on the instruction in the AFS initialization file that begins with the string #chkconfig, the command automatically creates the symbolic links that incorporate the script into the Linux startup and shutdown sequence.
# /sbin/chkconfig --add afs
(Optional) There are now copies of the AFS initialization file in both the /usr/vice/etc and /etc/rc.d/init.d directories, and copies of the afsd options file in both the /usr/vice/etc and /etc/sysconfig directories. If you want to avoid potential confusion by guaranteeing that the two copies of each file are always the same, create a link between them. You can always retrieve the original script or options file from the OpenAFS Binary Distribution if necessary.
# cd /usr/vice/etc
# rm afs.rc afs.conf
# ln -s /etc/rc.d/init.d/afs afs.rc
# ln -s /etc/sysconfig/afs afs.conf
Proceed to Step 4.
On Solaris systems:
Reboot the machine and log in again as the local superuser root.
# cd /
# shutdown -i6 -g0 -y
login: root
Password: root_password
Run the AFS initialization script.
# /etc/init.d/afs start
Change to the /etc/init.d directory and issue the ln -s command to create symbolic links that incorporate the AFS initialization script into the Solaris startup and shutdown sequence.
# cd /etc/init.d
# ln -s ../init.d/afs /etc/rc3.d/S99afs
# ln -s ../init.d/afs /etc/rc0.d/K66afs
(Optional) There are now copies of the AFS initialization file in both the /usr/vice/etc and /etc/init.d directories. If you want to avoid potential confusion by guaranteeing that they are always the same, create a link between them. You can always retrieve the original script from the OpenAFS Binary Distribution if necessary.
# cd /usr/vice/etc
# rm afs.rc
# ln -s /etc/init.d/afs afs.rc
Verify that /usr/afs and its subdirectories on the new file server machine meet the ownership and mode bit requirements outlined in Protecting Sensitive AFS Directories. If necessary, use the chmod command to correct the mode bits.
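A quick way to review the current settings is to list the directories and compare the output against the requirements in that section; the directory list shown here is typical rather than exhaustive.
# ls -ld /usr/afs /usr/afs/bin /usr/afs/db /usr/afs/etc /usr/afs/local /usr/afs/logs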
To configure this machine as a database server machine, proceed to Installing Database Server Functionality.