This chapter describes the various file systems provided by the Tru64 UNIX operating system. Following a brief overview (Section 4.1), these file systems are discussed:
Virtual File System (Section 4.2)
Advanced File System (Section 4.3)
UNIX File System (Section 4.4)
Cluster File System (Section 4.5)
Network File System (Section 4.6)
CD-ROM File System (Section 4.7)
DVD File System (Section 4.8)
Memory File System (Section 4.9)
The /proc File System (Section 4.10)
File-on-File Mounting File System (Section 4.11)
File Descriptor File System (Section 4.12)
The file systems provided by Tru64 UNIX are all accessed through a Virtual File System (VFS) layer, and are integrated with the virtual memory Unified Buffer Cache (UBC).
The file system that you see is handled by the Virtual File System layer,
which interacts with the local file system or the networked file system.
Under Tru64 UNIX,
the default file system is the Advanced File System (AdvFS), although the
traditional UNIX File System (UFS) is also available.
From there, the networked
file system or the local file system might interface with the Logical Storage
Manager, and in turn, the device drivers and the physical storage devices.
Figure 4-1
illustrates this interplay of the file systems,
the Logical Storage Manager (LSM), and the physical storage devices.
Figure 4-1: File Systems
For information about supported file systems in a cluster, see the
Cluster Technical Overview.
4.2 Virtual File System
The Virtual File System (VFS) is based on the Berkeley 4.3 Reno virtual file system. VFS presents a uniform interface to users and applications, an interface that is abstracted from the file system layer to allow common access to files, regardless of the file system on which they reside. As a result, file access across different file systems is transparent to the user.
A structure known as a vnode contains information about each file in a mounted file system. The vnodes are analogous to inodes: they are more or less wrappers around file-system-specific nodes. If, for example, a read or write request is made on a file, the vnode points the system call to the routine appropriate for that file system. A read request is pointed to advfs_read when the request is made on a file in AdvFS; to ufs_read when the request is made on a file in a UFS; or to nfs_read when the request is made on a file in an NFS-mounted file system.
The Tru64 UNIX VFS implementation supports Extended File Attributes
(XFAs), including support for any application that wants to assign an XFA
to a file.
Both AdvFS and UFS support XFAs.
For more information on XFAs, see the setproplist(2) reference page.
4.3 Advanced File System
The Advanced File System (AdvFS), the default root file system for Tru64 UNIX, provides flexibility, compatibility, data availability, high performance, and simplified system management. This log-based file system handles files and filesets approaching 16 terabytes in length.
The configuration of AdvFS differs from the traditional UNIX file system. In AdvFS, the physical storage layer is managed independently of the directory layer. System administrators can add and remove storage without unmounting the file system or halting the operating system. As a result, configuration planning is less complicated and more flexible.
From a user's perspective, AdvFS behaves like any other UNIX file system.
You can use the mkdir command to create new directories, the cd command to change directories, and the ls command to list directory contents. AdvFS logical structures, quota controls, and backup capabilities are based on traditional file system design. AdvFS has its own complement of file system maintenance utilities, including mkfdmn and mkfset, which create file systems, and vdump and vrestore, which back up and restore filesets. AdvFS commands and utilities are described in the AdvFS Administration manual.
Without taking an AdvFS file system offline, system administrators can perform backups, file system reconfiguration, and file system tuning. End users can retrieve their own unintentionally deleted files from predefined trashcan directories or from clone filesets without assistance from system administrators.
The separately licensed AdvFS Utilities provide additional file management capabilities and a Web-based graphical user interface to simplify system administration. The graphical interface, which runs under the Common Desktop Environment (CDE), features menus, graphical displays, and comprehensive online help that make it easy to perform AdvFS operations. In addition, the graphical interface displays summarized system status information.
The AdvFS Utilities support multivolume file systems, which enable file-level striping (spreading data across more than one volume) to improve file transfer rates for I/O-intensive applications. The Logical Storage Manager (LSM), which allows volume-level striping, can be incorporated into AdvFS configurations.
4.4 UNIX File System
The UNIX File System (UFS) is a local file system. At one time, UFS was the principal file system, and it is still an alternative to AdvFS. Many administrators choose to use the familiar UFS file system on system disks or in instances where the advanced features of AdvFS are not required. AdvFS and UFS can coexist on a system.
UFS is compatible with the Berkeley 4.3 Tahoe release.
It allows a pathname component to be up to 255 bytes, with a fully qualified pathname length restriction of 1023 bytes. The Tru64 UNIX implementation of UFS supports file sizes that exceed 2 GB.
4.5 Cluster File System
In a TruCluster Server cluster, the Cluster File System (CFS) is a virtual file system that sits above the physical file systems to provide clusterwide access to mounted file systems.
In general, the CFS makes all files visible to all cluster members and accessible by each member. Each cluster member has the same view, regardless of whether a file is stored on a device that is connected to all cluster members or on one that is private to a single member. By maintaining cache coherency across cluster members, CFS guarantees that all members at all times have the same view of file systems mounted in the cluster.
From the perspective of the CFS, each file system or AdvFS domain is served to the entire cluster by a single cluster member. Any cluster member can serve file systems on devices anywhere in the cluster. File systems mounted at cluster boot time are served by the first cluster member to have access to them. This means that file systems on devices on a bus private to one cluster member are served by that member.
For information about the Cluster File System, see the
Cluster Technical Overview.
4.6 Network File System
The Network File System (NFS) is a facility for sharing files in a heterogeneous environment of processors, operating systems, and networks. NFS does so by mounting a remote file system or directory on a local system and then reading or writing the files as though they were local.
The Tru64 UNIX environment supports NFS Version 3 and NFS Version 2. The NFS Version 2 code is based on ONC Version 4.2, which is licensed from Sun Microsystems; the NFS Version 3 code is derived from prototype code from Sun Microsystems.
Because Tru64 UNIX supports both NFS Version 3 and Version 2, the NFS client and server bind at mount time using the highest NFS version number they both support. For example, a Tru64 UNIX client will use NFS Version 3 when it is served by an NFS server that supports NFS Version 3; however, when it is served by an NFS server running only NFS Version 2, the NFS client will use NFS Version 2. For more detailed information on NFS Version 3, see the paper NFS Version 3: Design and Implementation (USENIX 1994).
In addition to the basic NFS services, Tru64 UNIX supports the following enhancements:
NFS over TCP
Write-gathering
NFS locking
Automounting
PC-NFS
WebNFS
NFS Version 3 supports all the features of NFS Version 2 and the following:
Improved performance
Support for reliable asynchronous writes, which improves write performance over NFS Version 2 by a factor of seven, thereby reducing client response latency and server I/O loading
Support for a READDIRPLUS procedure that returns file handles and attributes with directory names to eliminate LOOKUP calls when scanning a directory
Support for servers to return metadata on all operations to reduce the number of subsequent GETATTR procedure calls
Support for weak cache consistency data to allow a client to manage its caches more effectively
Improved security
An ACCESS procedure that fixes the problems in NFS Version 2 with superuser permission mapping and allows access checks at file-open time, so that the server can better support Access Control Lists (ACLs)
File names and pathnames specified as strings of variable length, with the maximum length negotiated between the client and server using the PATHCONF procedure
Guaranteed exclusive creation of files
In addition to the NFS Version 3.0 functions, Tru64 UNIX features the following enhancements to NFS:
NFS over TCP
Although NFS has traditionally been run over the UDP protocol, Tru64 UNIX also supports NFS over the TCP protocol. For more information, see the mount(8) reference page.
Write-gathering
On an NFS server, multiple synchronous write requests to the same file are combined to reduce the number of actual writes as much as possible. The data portions of successive writes are cached and a single metadata update is done that applies to all the writes. Replies are not sent to the client until all data and associated metadata are written to disk to ensure that write-gathering does not violate the NFS crash recovery design.
As a result, write-gathering increases write throughput by up to 100 percent and the CPU overhead associated with writes is substantially reduced, further increasing server capacity.
NFS locking
Using the fcntl system call to control access to file regions, NFS locking allows you to place locks on file records over NFS, protecting segments of a shared, NFS-served database. The status daemon, rpc.statd, monitors the NFS servers and maintains the NFS lock if the server goes down. When the NFS server comes back up, a reclaiming process allows the lock to be reattached.
Automounting
On an NFS client, the automount and autofsd daemons offer alternatives to mounting remote file systems with the /etc/fstab file, allowing you to mount the file systems on an as-needed basis.
When a user on a system running one of these daemons invokes a command that must access a remotely mounted file or directory, the daemon mounts that file system or directory and keeps it mounted for as long as the user needs it. When a specified amount of time elapses (the default is 5 minutes) without the file system or directory being accessed, the daemon unmounts it.
You specify the file systems to be mounted in map files, which you can customize to suit your environment. You can administer map files locally, through NIS, or through a combination of the two.
Automounting NFS-mounted file systems provides the following advantages over static mounts:
If NIS maps are used and file systems are moved to other servers, users do not need to do anything to access the moved files. Every time the file systems need to be mounted, the daemon will mount them from the correct locations.
In the case of read-only files, if more than one NFS server is serving a given file system, the daemon will connect you to the first server that responds. If at least one of the servers is available, the mount will not hang.
By unmounting NFS-mounted file systems that have not been accessed for more than a certain interval (five minutes by default), the daemon conserves system resources, particularly memory.
The autofsd daemon has additional benefits over the automount daemon. It is more efficient because it requires less communication between the kernel and the user-space daemon, and it provides higher availability. Although autofsd must be running for mounts and unmounts to be performed, if it is killed or becomes unavailable, existing automounted NFS file systems continue to be available.
For more information about the automount and autofsd daemons, see the Network Administration: Services manual, the Release Notes for Version 5.1B, and the automount(8), autofsd(8), and autofsmount(8) reference pages.
PC-NFS
HP supports the PC-NFS server daemon, pcnfsd, which allows PC clients with PC-NFS configured to do the following:
Authenticate users
The PC-NFS pcnfsd daemon, in compliance with Versions 1.0 and 2.0 of the pcnfsd protocol, assigns UIDs and GIDs to PC clients so that they can talk to NFS.
The pcnfsd daemon performs UNIX login-like password and user name verification on the server for the PC client. If the authentication succeeds, the pcnfsd daemon then grants the PC client the same permissions accorded to that user name. The PC client can mount NFS file systems by talking to the mountd daemon as long as the NFS file systems are exported to the PC client in the /etc/exports file on the server.
Because there is no mechanism in Windows to perform file permission checking, the PC client calls the authentication server to check the user's credentials against the file's attributes. This happens when the PC client makes NFS requests to the server for file access that requires permission checking, such as opening a file.
Access network printers
The pcnfsd daemon authenticates the PC client and then spools and prints the file on behalf of the client.
WebNFS
WebNFS is an NFS protocol that allows clients to access files over the Internet in the same way that local files are accessed. WebNFS uses a public file handle that allows it to work across a firewall. This public file handle also reduces the amount of time required to initialize a connection. The public file handle is associated with a single directory (public) on the WebNFS server.
For more information, see the exports(4), exportfs(2), and nfs_intro(4) reference pages.
4.7 CD-ROM File System
The Compact Disk Read-Only Memory File System (CDFS) is a local file system. Tru64 UNIX supports the ISO-9660 CDFS standard for data interchange between multiple vendors; the High Sierra Group standard for backward compatibility with earlier CD-ROM formats; and an implementation of the Rock Ridge Interchange Protocol (RRIP), Version 1.0, Revision 1.09.
The RRIP extends ISO-9660 system use areas to include multiple sessions, mixed-case and long file names; symbolic links; device nodes; deep directory structures; user IDs and group IDs and permissions on files; and POSIX timestamps.
Additionally, Tru64 UNIX supports the X/Open Preliminary Specification (1991) CD-ROM Support Component (XCDR). XCDR allows users to examine selected ISO-9660 attributes through defined utilities and shared libraries. XCDR also allows system administrators to substitute different file protections, owners, and file names for the default CD-ROM files. For more information, see the cdfs(4) reference page.
4.8 DVD File System
The Digital Versatile Disk (DVD) file system enables the reading of disks formatted in the Optical Storage Technology Association (OSTA) Universal Disk Format (UDF) specification. These interfaces conform to the ISO/IEC 13346:1995 and ISO 9660:1988 standards.
User data sectors in a DVD-ROM can contain any type of data in any format. However, for Tru64 UNIX support through the DVDFS system, the OSTA UDF file format standard is mandatory. Additionally, DVD-ROM standards require that the logical sector size and the logical block (the user data block) size be 2048 bytes.
See the dvdfs(4) reference page.
4.9 Memory File System
The Memory File System (MFS) is essentially a UNIX File System that resides in memory. No permanent file structures or data are written to disk, so the contents of an MFS are lost on system reboots, unmounts, or power failures. Because it does not write data to disk, MFS is a very fast file system that is well suited to storing temporary files or read-only files that are loaded into it after it is created.
For example, if you are building software that would have to be restarted if it failed, MFS is a good choice to use for storing the temporary files that are created during the build process, because the speed of MFS would reduce the build time.
Because Tru64 UNIX is a virtual memory system, the size of an MFS is not limited to free physical memory. The data in an MFS can be moved to the swap space if it is not actively accessed, thereby improving overall system performance. MFS is not supported in a TruCluster Server configuration.
For more information about MFS, see the newfs(8) reference page.
4.10 The /proc File System
The /proc file system is a local file system that enables running processes to be accessed and manipulated as files by the system calls open, close, read, write, lseek, and ioctl. The /proc file system is layered beneath the VFS; it is a pseudo file system that occupies no actual disk space. You can use the mount and umount commands to manually mount and unmount the file system, or you can define an entry for it in the /etc/fstab file.
On a clustered system, each cluster member has its own /proc file system, which is accessible only by that member.
While the /proc file system is most useful for debuggers, it enables any process with the correct permissions to control another running process. Thus, a parent/child relationship does not have to exist between a debugger and the process being debugged.
For more information, see the proc(4) reference page.
4.11 File-on-File Mounting File System
The File-on-File Mounting (FFM) file system allows regular, character, or block-special files to be mounted over regular files.
FFM is used, for the most part, by the Tru64 UNIX fattach and fdetach system calls, which attach and detach a STREAMS-based pipe (or FIFO). The two system calls are SVR4-compatible. By using FFM, a FIFO, which normally has no file system object associated with it, is given a name in the file system space. As a result, a process that is unrelated to the process that created the FIFO can then access the FIFO.
In addition to programs using FFM through the fattach system call, users can mount one regular file on top of another by using the mount command. Mounting a file on top of another file does not destroy the contents of the covered file; it simply associates the name of the covered file with the mounted file, thereby making the contents of the covered file temporarily unavailable. The covered file can be accessed only after the file mounted on top of it is unmounted.
Note that the contents of the covered file are still available to any process that had the file open at the time of the call to fattach or had the file open when a user issued a mount command that covered the file.
On clustered systems, an FFM file system is accessible only on the member on which it is mounted.
For more information on FFM, see the ffm(4) reference page.
4.12 File Descriptor File System
The File Descriptor File System (FDFS) allows applications to reference a process's open file descriptors as if they were files in UFS. The association is accomplished by aliasing a process's open file descriptors to file objects. When FDFS is mounted, opening or creating a file descriptor file has the same effect as using the dup system call.
On clustered systems, an FDFS is accessible only on the member on which it is mounted.
FDFS allows applications that were not written with support for UNIX-style I/O to use pipes, named pipes, and I/O redirection. FDFS is not configured into the Tru64 UNIX system; it must be mounted by command or defined as an entry in the system's /etc/fstab file. For more information on FDFS, see the fd(4) reference page.