The NFS client supports shared mounting by using the /SHARE qualifier with the MOUNT command. Any user can mount a file system using the /SHARE qualifier; SYSNAM or GRPNAM privileges are not required. The /SHARE qualifier places the logical name in the job logical name table and increments the volume mount count, regardless of the number of job mounts. When the job logs out, all job mounts are dismounted, which causes the volume mount count to be decremented.
The following example illustrates how to specify a shared mount:
TCPIP> MOUNT DNFS1: /HOST=BART /PATH="/DKA100/ENG"
TCPIP> MOUNT DNFS1: /HOST=BART /PATH="/DKA100/ENG" /SHARE
This mount request increments the mount count by 1. You must specify the /SHARE qualifier with the same host name and path as used in the initial mount to ensure that the mount is seen as a shared mount instead of as a new mount request.
With a shared mount, mount requests increment the mount count by 1 under the following circumstances:
- With the initial mount of a path on a given mount point
- With each subsequent shared (/SHARE) mount that specifies the same host and path on that mount point
In this way, if the main process of the job logs out, the job mount is deallocated, and the volume mount count decrements by 1 (if zero, the device is dismounted). OpenVMS handles dismounting differently based on whether you use the TCP/IP management command DISMOUNT or the DCL command DISMOUNT. These differences are as follows:
SYSTEM-F-NO-PRIVILEGE, operation requires privilege

%DISM-W-CANNOTDMT, NFSn: cannot be dismounted
%SYSTEM-F-DEVNOTMOUNT, device is not mounted
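For reference, a minimal sketch of the two command forms, reusing the DNFS1: device from the shared-mount example above (your mount point may differ):

TCPIP> DISMOUNT DNFS1:
$ DISMOUNT DNFS1: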
Consider the mount counts in the following sample MOUNT/DISMOUNT sequence:
The original mount for BART "/ENG" on DNFS1:[A], along with its shared mount, is dismounted. The subsequent DISMOUNT commands dismount examples 3 and 4, leaving nothing mounted.
23.4.2 Automounting
Automounting allows you to mount a remote file system on an as-needed basis. This means that the client automatically and transparently mounts a remote server path as soon as the user accesses the path name.
Automounting is convenient for file systems that are inactive for long periods of time. When a user on a client system invokes a command to access a remote file or directory, the automount daemon mounts the file system and keeps it mounted as long as the user needs it. You can specify an inactivity period (5 minutes is the default); when that period elapses without the file system being accessed, the software automatically dismounts the path.
You specify automounting and an inactivity interval with the qualifier /AUTOMOUNT=INACTIVITY:OpenVMS_delta_time.
The inactivity interval is the maximum inactive period for the mount attempt. When this period expires, the NFS client dismounts the path name as described below.
In the following example, the client automounts directory /usr/webster, residing on host robin, onto the OpenVMS mount point DNFS67:. Once the path name is referenced, the client keeps the path mounted until it reaches an inactive period of 10 minutes, after which it dismounts the file system. On subsequent references, the client remounts the file system:
TCPIP> MOUNT DNFS67: /HOST="robin" -
_TCPIP> /PATH="/usr/webster" /AUTOMOUNT=INACTIVITY=00:10:00
23.4.3 Background Mounting
Background mounting allows you to retry a file system mount that initially failed. For example, you may have set mount points in your system startup command file so they are automatically mounted every time your system reboots. In this scenario, if the server is unavailable (because, for example, the server is also rebooting), the mount requests fail. With the background option set, the client continues to try the mount after the initial failure. The client continues trying up to 10 times at 30-second intervals (the defaults) or for the number of retries and the interval you specify.
If you specify background mounting, you should also use the /RETRIES qualifier with a small nonzero number. This qualifier sets the number of times the transaction itself should be retried. Specify background mounting, along with the desired delay time and retry count parameters, with the qualifier /BACKGROUND=[DELAY:OpenVMS_delta_time,RETRY:n].
For example, the following command attempts to mount in background mode, on local device DNFS4:, the file system /flyer, which physically resides on host migration. If the mount fails, the NFS client waits 1 minute and then retries the connection up to 20 times:
TCPIP> MOUNT DNFS4: /HOST="migration" /PATH="/flyer" -
_TCPIP> /BACKGROUND=(DELAY:00:01:00, RETRY:20) /RETRIES=4
If you use the /BACKGROUND qualifier, HP strongly recommends that you also use the /RETRIES qualifier specifying a nonzero value. If you use the default value for /RETRIES (zero), the first mount attempt can never complete except by succeeding, and the process doing the mount will hang until the server becomes available.
23.4.4 Overmounting
Overmounting allows you to mount another path onto an existing mount point. Specify overmounting with the /FORCE qualifier. The client dismounts the original mount point and replaces it with a new one.
Mounting a higher or lower directory level in a previously used path is also an overmount. For example, an overmount occurs when you execute two MOUNT commands in the following order:
TCPIP> MOUNT DNFS123:[USERS.MNT] /HOST="robin" /PATH="/usr"
%DNFS-S-MOUNTED, /usr mounted on _DNFS123:[USERS.MNT]
TCPIP> MOUNT DNFS123:[USERS.MNT] /HOST="robin" /PATH="/usr/tern" /FORCE
%DNFS-S-REMOUNTED, _DNFS123:[USERS.MNT] remounted as /usr/tern on ROBIN
The second MOUNT command specifies a lower level in the server path. This constitutes another path name and qualifies for an overmount.
23.4.5 Occluded Mounting
Occluded mounting allows you to mount a file system onto a client mount point that is higher or lower in the directory structure than an existing, active mount. This is different from overmounting because dismounting does not occur. Instead, the client occludes (hides from view) the subdirectories that are added to or dropped from the original mount specification when you perform a directory listing.
Specify the /FORCE qualifier with an occluded mount.
In the following example, the mount point specification was backed up one subdirectory from the previous one. If you enter the SHOW MOUNT command, both mounts are visible. However, if you enter DIRECTORY for DNFS2:[USERS.SPARROW], [.MNT] is no longer visible. To make this subdirectory visible again, issue the DISMOUNT command to dismount DNFS2:[USERS.SPARROW].
TCPIP> MOUNT DNFS2:[USERS.SPARROW.MNT] /HOST="birdy" /PATH="/usr"
%DNFS-S-MOUNTED, /usr mounted on _DNFS2:[USERS.SPARROW.MNT]
TCPIP> MOUNT DNFS2:[USERS.SPARROW] /HOST="birdy" /PATH="/usr" /FORCE
%DNFS-S-MOUNTED, /usr mounted on _DNFS2:[USERS.SPARROW]
-TCPIP-I-OCCLUDED, previous contents of _DNFS2:[USERS.SPARROW] occluded
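To make the occluded [.MNT] subdirectory visible again, dismount the occluding mount point as described above; a minimal sketch using the TCP/IP management command and the device from this example:

TCPIP> DISMOUNT DNFS2:[USERS.SPARROW]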
The following example shows a mount of UNIX directory /usr to the OpenVMS device and directory DNFS3:[0,0].
On the UNIX host, the directory listing looks like this:
unix% ls
grebe         wings         pratincole
To do the mount, enter:
$ TCPIP MOUNT DNFS3: /HOST="unix" /PATH="/usr"
To check that the mount succeeded, enter:
$ TCPIP SHOW MOUNT DNFS3: /FULL
   .
   .
   .
On the OpenVMS host, the directory listing looks like this:
$ DIRECTORY [0,0]

Directory DNFS3:[000,000]

GREBE.DIR;1         WINGS.DIR;1         PRATINCOLE.DIR;1

Total of 3 files.
Part 6 describes how to set up and manage the printing services available with TCP/IP Services, and includes the following chapters:
The LPR/LPD facility allows other network hosts to access printers on the server system and provides local access to printers on remote hosts. The remote print server and the client hosts must run Version 4.2 or later of the Berkeley Software Distribution line printer spooler software (lpd) to interoperate with TCP/IP Services LPR/LPD.
This chapter reviews key concepts and describes:
The LPR/LPD facility has both a client component (LPR) and a server component (LPD), both of which are partially included in an OpenVMS queue symbiont. The client is activated when you use one of the following commands:
For general information about using these commands, see the HP TCP/IP Services for OpenVMS User's Guide.
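For instance, once a client queue has been set up for a remote printer, a job can be submitted to it with the standard DCL PRINT command; in this sketch the queue name REMOTE_PRINT and the file name are placeholders:

$ PRINT /QUEUE=REMOTE_PRINT REPORT.TXT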
The server is activated when a remote user submits a print job to a printer configured on the OpenVMS server. The LPD server consists of two components:
- The LPD receiver, which accepts inbound print jobs from remote hosts over the network and queues them to the local print queue
- The LPD print symbiont, which processes the queued jobs
The same LPD symbiont image is used for both client and server. It acts as the client on queues set up for remote printers, and it acts as the server on the local LPD queue.
LPD uses the printcap database to process print requests. The printcap database, located in SYS$SPECIFIC:[TCPIP$LPD]TCPIP$PRINTCAP.DAT, is an ASCII file that defines the print queues. The printcap entries are similar in syntax to the entries in a UNIX /etc/printcap file.
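As an illustration only, a printcap entry might look like the following sketch; the printer name LOCAL1 and the directory paths are placeholders, and LPRSETUP (see Section 24.5) generates the actual entries for your system:

LOCAL1:\
    :lp=LOCAL1:\
    :sd=/sys$specific/tcpip$lpd/LOCAL1:\
    :lf=/sys$specific/tcpip$lpd/LOCAL1.log:

Here lp, sd, and lf are standard printcap fields naming the output device or queue, the spool directory, and the log file, respectively.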
Use the printer setup program LPRSETUP to configure or modify printers. The setup program creates spool directories and log files based on the information you supply. Section 24.5 describes how to use the printer setup program to configure printers.
24.2 Configuring LPD for IPv6 Support
IPv6 support is automatically enabled for LPD on systems where LPD is already enabled. On systems where LPD is not already enabled, IPv6 support is enabled when the LPD service is configured.
The IPv6 support is indicated by the IPV6 flag in the LPD service database entry. For example:
TCPIP> SHOW SERVICE LPD/FULL

Service: LPD
...
Flags:   Listen IPv6
...
If LPD is configured already, the LPD service database entry is automatically updated to have the IPV6 flag set. If not, the flag is set when LPD is configured with the TCPIP$CONFIG procedure.
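For example, to run the configuration procedure from its usual SYS$MANAGER location:

$ @SYS$MANAGER:TCPIP$CONFIG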
24.3 Configuring LPR/LPD
When you enable the LPD service, the TCPIP$CONFIG.COM command procedure:
You can use the TCPIP$LPD_CONF.TEMPLATE file to create a TCPIP$LPD.CONF configuration file, which allows you to change the way the LPD facility operates. For guidelines about specifying configuration options in the LPD.CONF file, see Section 1.1.5.
After you modify the TCPIP$LPD.CONF file, you must stop and restart LPD using the TCPIP$LPD_STARTUP.COM and TCPIP$LPD_SHUTDOWN.COM command procedures.
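For example, assuming the command procedures reside in their usual SYS$STARTUP location:

$ @SYS$STARTUP:TCPIP$LPD_SHUTDOWN.COM
$ @SYS$STARTUP:TCPIP$LPD_STARTUP.COM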
Table 24-1 describes the configuration options.
Configuration Option | Description |
---|---|
1st-VFC-Prefix-Special | Specifies not to insert an extra line-feed character at the beginning of print files. |
Droptime | Indicates how long after repeated timeouts a connection should be maintained before closing it. The value is specified in seconds. The Drop timer is in effect only after the link has been established, and it takes effect only if the Keepalive configuration option is set. The default value for the Drop timer is 300 seconds. |
Idle-Timeout | Specifies the length of time for the LPD server to wait for an incoming LPD connection, in OpenVMS delta time format. The default is 5 minutes. This behavior requires that the Persistent-Server option be specified. |
Inbound-Queues-Per-Node | Specifies the number of inbound execution queues to create for each cluster node when the LPD server starts. The default is 1. |
Keepalive | Specifies the number of seconds to wait before checking the other end of a link that appears to be idle. The Keepalive timer detects when a remote host has failed or has been brought down, or when the logical connection has been broken. |
Loop-Max | Specifies the maximum number of times the LPD server should retry a connection. The default is no maximum (the same as setting this option to 0). This behavior requires that the Persistent-Server option be specified. |
Persistent-Server | Enables the persistence of the LPD server. This behavior is disabled by default. |
Probetime | Specifies the number of seconds to wait before timing out the connection. The value of the Probetime option must always be less than or equal to the value of the Droptime option. The default value for the Probetime option is 75 seconds. The Probe timer controls: |
PS-Extensions | Controls HP PrintServer extension support. By default, PrintServer extensions are supported by LPD. To disable support, specify the NON_PS keyword to this option. To enable support, specify the LPS keyword. |
Retry-Interval | Specifies the amount of time to wait before requeuing a print job that failed because of a soft error, such as the loss of the TCP connection. The default is 5 minutes (0 00:05:00.00). |
Retry-Maximum | Specifies the OpenVMS delta time for which the LPD symbiont will continue to requeue a print job that has failed with a soft error. The default is 1 hour (0 01:00:00.00). |
Setup-NoLF | By default, the LPD server inserts a line feed into the byte stream after the SETUP module and before the actual print file. This option allows you to control this behavior. To prevent LPD from inserting line-feed characters, set this option to TRUE. For information about controlling this behavior using the printcap file, see Section 24.5. |
Stream-Passall | Controls whether LPD will add extra line feed characters to files with embedded carriage control (the default). Set this option to preserve the behavior of previous versions of TCP/IP Services. This is useful when your users print from HP PATHWORKS Client software. |
Utility-Queues-Per-Node | Specifies the number of outbound execution queues to create for each cluster node when the LPD server starts. The default is 0. |
Synchronize-All-Jobs | Controls whether the LPD print symbiont process running in an inbound execution queue (TCPIP$LPD_IN_nodename_nn) will synchronize on the completion of each job that it submits to a final destination print queue. If the LPD service log option LOGOUT is set using the TCP/IP management command SET SERVICE/LOG, when a print job submitted by the symbiont process completes, the LPD server synchronizes and sends an OPCOM message containing the job number, queue name, and user and host names of the submitter. Each synchronization causes the consumption of one slot of the symbiont process's AST quota and some dynamic memory. If many jobs submitted by an LPD symbiont process are pending (for example, because the print queue to which they were submitted has been stopped), the symbiont process can exhaust its AST quota or virtual memory. If the Synchronize-All-Jobs option is set to FALSE, synchronization occurs only for print jobs that have either an LPD mailback completion notice or a temporary layup file sent from the LPD client to be used in the printing of the job. Setting this option to FALSE helps limit the exhaustion of dynamic memory or AST quota when many print jobs are outstanding, because most print jobs do not use mailback completion (/PARAMETERS=MAIL) or layup files (/PARAMETERS=LAYUP_DEFINITION). The default setting for the Synchronize-All-Jobs option is TRUE, which is appropriate for most sites. Systems with heavy inbound processing across many print queues might need to set this option to FALSE. |
VMS-Flagpages | Enables the OpenVMS flag-page print options described in Section 24.11. |
Symbiont-Debug | Writes diagnostics to the LPD queue log file. Applies to outbound jobs (LPD client) and to inbound jobs (LPD server) that are processed by the LPD symbiont controlling the local print queue. See Section 24.12 for more information. |
Receiver-Debug | Writes diagnostics to the receiver log file TCPIP$LPD_RCV_LOGFILE.LOG. Applies to inbound jobs (LPD server) from the time they are received from the remote host over the network to the time they are queued to the local print queue for processing by the LPD print symbiont. See Section 24.12 for more information. |
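As a sketch of how options from Table 24-1 might appear in TCPIP$LPD.CONF, assuming the one-option-per-line Item: Value format described in Section 1.1.5 (the values shown are illustrative only, not recommendations):

Idle-Timeout: 0 00:10:00
Persistent-Server: TRUE
Retry-Maximum: 0 02:00:00
Synchronize-All-Jobs: FALSE

After saving such a change, stop and restart LPD as described in Section 24.3.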