LOAD_SYS_IMAGES controls the loading of system images described in the system image data file, VMS$SYSTEM_IMAGES. This parameter is a bit mask.
On VAX systems, the following bit is defined:
Bit | Description |
---|---|
0 (SGN$V_LOAD_SYS_IMAGES) | Enables loading alternate execlets specified in VMS$SYSTEM_IMAGES.DATA. |
On Alpha and I64 systems, the following bits are defined:
Bit | Description |
---|---|
0 (SGN$V_LOAD_SYS_IMAGES) | Enables loading alternate execlets specified in VMS$SYSTEM_IMAGES.DATA. |
1 (SGN$V_EXEC_SLICING) | Enables executive slicing. |
2 (SGN$V_RELEASE_PFNS) | Enables releasing unused portions of the Alpha and I64 huge pages. |
These bits are on by default. Executive slicing can be disabled by using the conversational bootstrap.
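For example, to disable executive slicing for a single boot, you can clear bit 1 during a conversational bootstrap. The following sketch assumes an Alpha console and the default value of 7 (bits 0, 1, and 2 set); console boot syntax and the device name are illustrative and vary by platform:

```
>>> boot -flags 0,1 dka0          ! Conversational boot (device name is illustrative)
SYSBOOT> SHOW LOAD_SYS_IMAGES     ! Display the current value (7 if all three bits are set)
SYSBOOT> SET LOAD_SYS_IMAGES 5    ! Clear bit 1 (SGN$V_EXEC_SLICING); keep bits 0 and 2
SYSBOOT> CONTINUE                 ! Resume the bootstrap with the new value
```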
For simple timesharing systems, the default value is adequate. If your application uses many locks, as in the case of heavy RMS file sharing or a database management application, you should increase this parameter. When you change the value of LOCKIDTBL, examine the value of RESHASHTBL and change it if necessary.
The OpenVMS Lock Management facility is described in the HP OpenVMS Programming Concepts Manual. You can monitor locks with the MONITOR LOCK command of the Monitor utility.
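For example, the following MODPARAMS.DAT entries (the values are illustrative only, not recommendations) raise the floor on the lock ID table and revisit the resource hash table at the same time; run AUTOGEN afterward (for example, @SYS$UPDATE:AUTOGEN GETDATA SETPARAMS) and then watch lock activity with MONITOR LOCK:

```
! SYS$SYSTEM:MODPARAMS.DAT -- illustrative values only
MIN_LOCKIDTBL = 4096      ! Guarantee at least 4096 lock ID table entries
RESHASHTBL = 1024         ! Re-examine the resource hash table whenever LOCKIDTBL changes
```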
This special parameter is used by HP and is subject to change. Do not change this parameter unless HP recommends that you do so.
On OpenVMS Version 8.3 systems, LOCKRMWT does not control lock remastering. See LOCKDIRWT.
LOCKRMWT can have a value from zero to 10. The default is 5. Remaster decisions are based on the difference in lock remaster weights between the master and a remote node. When weights are equal, the remote node needs about 13% more activity before the tree is remastered. If a remote node has a higher lock remaster weight, the amount of activity is less. If the remote node has a lower lock remaster weight, the additional activity required to move the tree is much greater.
Lock remaster weights of zero and 10 have additional meanings. A value of zero indicates that a node does not want to master trees; it always remasters to an interested node with a higher LOCKRMWT. Lock trees mastered on an interested node whose LOCKRMWT is lower than 10 are remastered to the node whose LOCKRMWT is 10.
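For example, on a system running a version earlier than 8.3 (where LOCKRMWT still influences remastering), you could lower a node's remaster weight so that busier nodes tend to take over its lock trees; the value shown is arbitrary:

```
$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE CURRENT             ! Start from the current on-disk parameter settings
SYSGEN> SET LOCKRMWT 2          ! Below the default of 5; this node yields trees more readily
SYSGEN> WRITE CURRENT           ! Takes effect at the next reboot
SYSGEN> EXIT
```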
Other MAXBOB* parameters are obsolete beginning with OpenVMS Version 7.3.
The required buffered I/O packet size is determined by the number of bytes specified in the I/O request plus the size of a driver-dependent and function-dependent header area. The header area is a minimum of 16 bytes; there is no absolute upper limit, but it is usually a few hundred bytes in size.
On OpenVMS VAX systems beginning with Version 7.1, the default value is 4112. The default value on Alpha and I64 systems continues to be 8192.
The maximum value of MAXBUF is 64000 bytes.
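As a rough illustration of the sizing rule: a buffered I/O request of 7,900 bytes plus a 200-byte header requires an 8,100-byte packet, which fits within the Alpha and I64 default of 8192 but not within the VAX default of 4112. If larger buffered I/O transfers are needed, MAXBUF can be raised in MODPARAMS.DAT (the value shown is arbitrary):

```
! SYS$SYSTEM:MODPARAMS.DAT -- illustrative value only
MAXBUF = 16384      ! Allow buffered I/O packets up to 16,384 bytes (upper limit is 64,000)
```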
This special parameter is used by HP and is subject to change. Do not change this parameter unless HP recommends that you do so.
The default value is normally configured to allow you to create the desired number of processes. If the following message appears, you need to increase the value of MAXPROCESSCNT:
%SYSTEM-F-NOSLOT, No PCB to create process
On Alpha and I64 systems beginning with Version 8.1, the default value is 32,767.
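If the NOSLOT message appears, one approach (the value is illustrative) is to set a higher floor in MODPARAMS.DAT and rerun AUTOGEN so that parameters that depend on MAXPROCESSCNT are recalculated together:

```
! SYS$SYSTEM:MODPARAMS.DAT -- illustrative value only
MIN_MAXPROCESSCNT = 512     ! Guarantee slots for at least 512 processes
```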
MAXQUEPRI refers to relative queue scheduling priority, not to the execution priority of the job.
A value of 1 causes other nodes in the MEMORY CHANNEL cluster to crash with bugcheck code MC_FORCED_CRASH if this node bugchecks or shuts down.
The default value is 0. A setting of 1 is intended only for debugging purposes; the parameter should otherwise be left at its default value.
PMDRIVER is a driver that serves as the MEMORY CHANNEL cluster port driver. It works together with MCDRIVER (the MEMORY CHANNEL device driver and driver interface) to provide MEMORY CHANNEL clustering. If PMDRIVER is not loaded, cluster connections are not made over the MEMORY CHANNEL interconnect.
The default value is 1, which causes PMDRIVER to be loaded when you boot the system. When you run CLUSTER_CONFIG.COM and select the MEMORY CHANNEL option, PMDRIVER is loaded automatically when you reboot the system.
HP recommends that this value not be changed. This parameter value must be the same on all nodes connected by MEMORY CHANNEL.
The default value is 800. HP recommends that this value not be changed. This parameter value must be the same on all nodes connected by MEMORY CHANNEL.
The default value is 200. HP recommends that this value not be changed. This parameter value must be the same on all nodes connected by MEMORY CHANNEL.
The default value is 992. This value is suitable in all cases except for systems with highly constrained memory. For such systems, you can reduce the memory consumption of MEMORY CHANNEL by slightly reducing the default value of 992. The value of MC_SERVICES_P6 must always be equal to or greater than the result of the following calculations:
The value of MC_SERVICES_P6 must be the same on all nodes connected by MEMORY CHANNEL.
The default value is 0. HP recommends that this value not be changed except while debugging MEMORY CHANNEL problems or adjusting the MC_SERVICES_P9 parameter.
Note that MC_SERVICES_P9 is not a dynamic parameter; you must reboot the system after each change for that change to take effect.
The default value is 150. HP recommends that this value not be changed.
The value of MC_SERVICES_P9 must be the same on all nodes connected by MEMORY CHANNEL.
This special parameter is used by HP and is subject to change. Do not change this parameter unless HP recommends that you do so.
On VAX systems, MINWSCNT sets the minimum number of fluid pages (pages not locked in the working set) required for the execution of a process. The value of MINWSCNT must provide sufficient space to execute any VAX instruction. Theoretically, the longest instruction requires 52 pages; however, all code can run with 20 fluid pages. An insufficient value may inhibit system performance or even put a process into an infinite loop on some instructions.
On Alpha and I64 systems, MINWSCNT sets the minimum number of pages required for the execution of a process. The default value is 20; the minimum value is 10.
The first two bits, 0 and 1, control the proactive memory reclamation mechanisms. Bit 2 controls deferred memory testing.
The following bit mask values are defined:
Bit | Description |
---|---|
0 | If this bit is set, reclamation is enabled by trimming from periodically executing, but otherwise idle, processes. This occurs when the size of the free list plus the modified list drops below two times the value of FREEGOAL. This function is disabled if the bit is clear. |
1 | If this bit is set, reclamation is enabled by outswapping processes that have been idle for longer than LONGWAIT seconds. This occurs when the size of the free list drops below FREEGOAL. This function is disabled if the bit is clear. |
2 | Controls deferred memory testing (only on AlphaServer 4100 systems). You can use this bit to speed up elapsed bootstrap time by controlling when memory is tested. |
3 | Reserved to OpenVMS use; must be zero. |
4 | If this bit is clear (the default), all page sizes supported by hardware can be used to map resident memory sections on I64 systems. If this bit is set, page sizes on I64 systems are limited to the maximum GH factor available on Alpha systems (512 * system page size). |
5-7 | Reserved for future use. |
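As a worked example of the bit arithmetic (not a recommendation): a value of 3 (1 + 2) enables both proactive reclamation mechanisms and leaves deferred memory testing off, while 7 additionally sets bit 2 on an AlphaServer 4100. Expressed as a MODPARAMS.DAT entry:

```
! SYS$SYSTEM:MODPARAMS.DAT -- illustrative value only
MMG_CTLFLAGS = 3     ! Bit 0 (1) + bit 1 (2): trim idle processes and outswap long-idle ones
```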
MPDEV_POLLER must be set to ON to enable automatic failback. You can disable automatic failback without disabling the poller by setting MPDEV_AFB_INTVL to 0. The default is 300 seconds.
MPDEV_REMOTE and MPDEV_AFB_INTVL have no effect when MPDEV_ENABLE is set to OFF.
To use multipath failover to a served path, MPDEV_REMOTE must be enabled on all systems that have direct access to shared SCSI/Fibre Channel devices. The first release to provide this feature is OpenVMS Alpha Version 7.3-1. Therefore, all nodes on which MPDEV_REMOTE is enabled must be running OpenVMS Alpha Version 7.3-1 (or later).
If MPDEV_ENABLE is set to OFF (0), the setting of MPDEV_REMOTE has no effect because the addition of all new paths to multipath sets is disabled. The default is ON.
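Before changing any of these settings, you can inspect them with SYSGEN; the following sketch is read-only:

```
$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> SHOW MPDEV_ENABLE       ! Is multipath set formation enabled?
SYSGEN> SHOW MPDEV_REMOTE       ! May MSCP-served paths join multipath sets?
SYSGEN> SHOW MPDEV_AFB_INTVL    ! Automatic failback interval in seconds (0 disables failback)
SYSGEN> EXIT
```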
If MPW_HILIMIT is too low, excessive page faulting can occur from the page file. If it is too high, too many physical pages can be consumed by the modified-page list.
If you increase MPW_HILIMIT, you might also need to increase MPW_WAITLIMIT. Note that if MPW_WAITLIMIT is less than MPW_HILIMIT, a system deadlock occurs. The values for the two parameters are usually equal.
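A MODPARAMS.DAT sketch that preserves the usual relationship between the two parameters (the values are illustrative, not recommendations):

```
! SYS$SYSTEM:MODPARAMS.DAT -- illustrative values only
MPW_HILIMIT = 16384       ! Upper bound on the modified-page list, in pages
MPW_WAITLIMIT = 16384     ! Keep equal to (never less than) MPW_HILIMIT to avoid deadlock
```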
MPW_LOLIMIT ensures that a certain number of pages are available on the modified-page list for page faults. If the number is too small, the caching effectiveness of the modified-page list is reduced. If it is too high, less memory is available for processes, so that swapping (and paging) may increase.
This special parameter is used by HP and is subject to change. Do not change this parameter unless HP recommends that you do so.
If MPW_WRTCLUSTER is too small, it takes many I/O operations to empty the modified-page list. If MPW_WRTCLUSTER is too large for the speed of the disk that holds the page file, other I/O operations are held up for the modified-page list write.
On VAX systems, the MPW_WRTCLUSTER default value and maximum value are both 120 512-byte pages; the minimum value is 16 512-byte pages.
On Alpha and I64 systems, the MPW_WRTCLUSTER default value is 64 8192-byte pages; its maximum value is 512 8192-byte pages; and its minimum value is 16 8192-byte pages.
On VAX systems, MSCP_BUFFER specifies the number of pages to be allocated to the MSCP server's local buffer area.
On Alpha and I64 systems, MSCP_BUFFER specifies the number of pagelets to be allocated to the MSCP server's local buffer area.
The MSCP_CMD_TMO default value of 0 is normally adequate. A value of 0 provides the same behavior as in previous releases of OpenVMS (which did not have an MSCP_CMD_TMO system parameter). A nonzero setting increases the amount of time before an MSCP command times out.
If command timeout errors are being logged on client nodes, setting the parameter to a nonzero value on OpenVMS servers reduces the number of errors logged. Increasing the value of this parameter reduces the number of client MSCP command timeouts and increases the time it takes to detect faulty devices.
If you need to decrease the number of command timeout errors, HP recommends that you set an initial value of 60. If timeout errors continue to be logged, you can increase this value in increments of 20 seconds.
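Following that recommendation, a first adjustment on the serving nodes might look like this:

```
! SYS$SYSTEM:MODPARAMS.DAT on each MSCP-serving node
MSCP_CMD_TMO = 60     ! Initial value; raise in 20-second increments if timeouts persist
```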
The default value is currently 32. Unless a system has very constrained memory available, HP recommends that this value not be increased.
Value | Description |
---|---|
0 | Do not load the MSCP server. This is the default value. |
1 | Load the MSCP server and serve disks as specified by the MSCP_SERVE_ALL parameter. |
Starting with OpenVMS Version 7.2, the serving types are implemented as a bit mask. To specify the type of serving your system will perform, locate the type you want in the following table and specify its value. For some systems, you may want to specify two serving types, such as serving the system disk and serving locally attached disks. To specify such a combination, add the values of each type and specify the sum, as in the example that follows the table.
In a mixed-version cluster that includes any systems running OpenVMS Version 7.1-x or earlier, serving all available disks is restricted to the pre-Version 7.2 behavior: all disks are served except those whose allocation class does not match the system's node allocation class. To specify this type of serving, use the value 9 (which sets bit 0 and bit 3).
The following table describes the serving type controlled by each bit and its decimal value:
Bit and Value When Set | Description |
---|---|
Bit 0 (1) | Serve all available disks (locally attached and those connected to HSx and DSSI controllers). Disks with allocation classes that differ from the system's allocation class (set by the ALLOCLASS parameter) are also served if bit 3 is not set. |
Bit 1 (2) | Serve locally attached (non-HSx and non-DSSI) disks. |
Bit 2 (4) | Serve the system disk. This is the default setting. This setting is important when other nodes in the cluster rely on this system being able to serve its system disk. This setting prevents obscure contention problems that can occur when a system attempts to complete I/O to a remote system disk whose system has failed. |
Bit 3 (8) | Restrict the serving specified by bit 0. All disks except those with allocation classes that differ from the system's allocation class (set by the ALLOCLASS parameter) are served. This is pre-Version 7.2 behavior. If your cluster includes systems running OpenVMS Version 7.1-x or earlier, and you want to serve all available disks, you must specify 9, the result of setting this bit and bit 0. |
Although the serving types are now implemented as a bit mask, the values of 0, 1, and 2, specified by bit 0 and bit 1, retain their original meanings:
0 --- Do not serve any disks (the default for earlier versions of OpenVMS).
1 --- Serve all available disks.
2 --- Serve only locally attached (non-HSx and non-DSSI) disks.
If the MSCP_LOAD system parameter is 0, MSCP_SERVE_ALL is ignored.
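As a worked example of combining serving types: to serve locally attached disks (bit 1, value 2) and the system disk (bit 2, value 4), specify 2 + 4 = 6; to get the pre-Version 7.2 behavior described above, specify 9 (bit 0 plus bit 3). Expressed in MODPARAMS.DAT:

```
! SYS$SYSTEM:MODPARAMS.DAT -- illustrative combination
MSCP_LOAD = 1          ! Load the MSCP server
MSCP_SERVE_ALL = 6     ! Bit 1 (2) + bit 2 (4): serve local disks and the system disk
```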
Specify one of the following values:
Value | Description |
---|---|
0 | Load the uniprocessing synchronization image SYSTEM_SYNCHRONIZATION_UNI.EXE. |
1 | If the CPU type is capable of SMP and two or more CPUs are present on the system, load the full-checking multiprocessing synchronization image SYSTEM_SYNCHRONIZATION.EXE. Otherwise, load the uniprocessing synchronization image SYSTEM_SYNCHRONIZATION_UNI.EXE. |
2 | Always load the full-checking version SYSTEM_SYNCHRONIZATION.EXE, regardless of system configuration or CPU availability. |
3 | If the CPU type is capable of SMP and two or more CPUs are present on the system, load the streamlined multiprocessing synchronization image SYSTEM_SYNCHRONIZATION_MIN.EXE. Otherwise, load the uniprocessing synchronization image SYSTEM_SYNCHRONIZATION_UNI.EXE. The default value is 3. |
4 | Always load the streamlined multiprocessing image SYSTEM_SYNCHRONIZATION_MIN.EXE, regardless of system configuration or CPU availability. |
Setting the SYSTEM_CHECK parameter to 1 has the effect of setting MULTIPROCESSING to 2.
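For example, to force the full-checking synchronization image while investigating a suspected synchronization problem (illustrative only; setting SYSTEM_CHECK to 1 has the same effect):

```
$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE CURRENT
SYSGEN> SET MULTIPROCESSING 2    ! Always load SYSTEM_SYNCHRONIZATION.EXE (full checking)
SYSGEN> WRITE CURRENT            ! Takes effect at the next reboot
SYSGEN> EXIT
```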
Value | Description |
---|---|
0 | Both Thread Manager upcalls and the creation of multiple kernel threads are disabled. |
1 | Thread Manager upcalls are enabled; the creation of multiple kernel threads is disabled. |
2-256 (Alpha and I64) | Both Thread Manager upcalls and the creation of multiple kernel threads are enabled. The number specified represents the maximum number of kernel threads that can be created for a single process. |
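For example, to enable Thread Manager upcalls and allow up to four kernel threads per process on an Alpha or I64 system (the count is illustrative):

```
! SYS$SYSTEM:MODPARAMS.DAT -- illustrative value only
MULTITHREAD = 4     ! Upcalls enabled; at most 4 kernel threads per process
```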