Beginning with OpenVMS Version 8.2 on Alpha and I64 systems, the lock value block has been extended from 16 to 64 bytes. To use this feature, applications must explicitly specify both the LCK$M_XVALBLK flag and the LCK$M_VALBLK flag and provide a 64-byte buffer when reading and writing the value block.
Existing applications that use the 16-byte buffer and the LCK$M_VALBLK flag continue to operate without modifications, even when interacting with applications that use the 64-byte lock value block.
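The following sketch shows one way a 64-byte value block request might be coded in C, in the style of the $ENQ examples later in this chapter. It assumes the usual layout in which the value block immediately follows the 8-byte lock status block; the resource name, lock mode, and routine name are placeholders.

#include <descrip.h>
#include <lckdef.h>
#include <lib$routines.h>
#include <starlet.h>

/* Lock status block followed by the 64-byte extended value block.
   (With LCK$M_VALBLK alone, the trailing buffer would be 16 bytes.) */
static struct {
    unsigned short status;
    unsigned short reserved;
    unsigned int   lock_id;
    unsigned char  valblk[64];
} lksb;

void take_lock(void)
{
    unsigned int status;
    $DESCRIPTOR(resnam, "SHARED_STATE");             /* placeholder resource name */

    /* Both flags must be specified to read and write all 64 bytes. */
    status = sys$enqw(0,                             /* efn */
                      LCK$K_PWMODE,                  /* lkmode */
                      &lksb,                         /* lksb and value block */
                      LCK$M_VALBLK | LCK$M_XVALBLK,  /* flags */
                      &resnam,                       /* resnam */
                      0, 0, 0, 0, 0, 0);             /* parid ... rsdm_id */
    if ((status & 1) != 1)
        lib$signal(status);
}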
In your design of an application using the extended lock value block, you may or may not have to take interoperability into account. If your new application uses only completely new resource names in a completely new resource tree that is never referenced by an old application, by a version of OpenVMS prior to Version 8.2, or by a VAX node, then you need not worry about interoperability.
If this is not the case, your design may need to take into account the possibility that the lock value block will be marked invalid as a result of interoperability. There are three situations in which the extended lock value block can be marked invalid:
The SS$_XVALNOTVALID condition value is a warning message, not an error message; therefore, the $ENQ service grants the requested lock and returns this warning on all subsequent calls to $ENQ until an application writes the value block with the LCK$M_XVALBLK flag set. SS$_XVALNOTVALID is fully described in the description of the $ENQ system service in the HP OpenVMS System Services Reference Manual: A--GETUAI.
If the entire lock value block is invalid, the SS$_VALNOTVALID status is returned and overrides the SS$_XVALNOTVALID status.
7.5 Dequeuing Locks
When a process no longer needs a lock on a resource, you can dequeue the lock by using the Dequeue Lock Request (SYS$DEQ) system service. Dequeuing locks means that the specified lock request is removed from the queue it is in. Locks are dequeued from any queue: Granted, Waiting, or Conversion (see Section 7.2.6). When the last lock on a resource is dequeued, the lock management services delete the name of the resource from its data structures.
The four arguments to the SYS$DEQ macro (lkid, valblk, acmode, and flags) are optional. The lkid argument allows the process to specify a particular lock to be dequeued, using the lock identification returned in the lock status block.
The valblk argument contains the address of a 16-byte lock value block or, if LCK$M_XVALBLK is specified on Alpha or I64 systems, a 64-byte lock value block. If the lock being dequeued is in protected write or exclusive mode, the contents of the lock value block are stored in the lock value block associated with the resource. If the lock being dequeued is in any other mode, the lock value block is not used. The lock value block can be used only if a specific lock is being dequeued; it cannot be used when the LCK$M_DEQALL flag is specified.
Three flags are available:
The following is an example of dequeuing locks:
#include <stdio.h>
#include <descrip.h>
#include <lckdef.h>

/* Declare a lock status block */
struct lock_blk {
        unsigned short lkstat, reserved;
        unsigned int   lock_id;
} lksb;
        .
        .
        .
        void read_updates();
        unsigned int status, lkmode=LCK$K_CRMODE, lkid;
        $DESCRIPTOR(resnam, "STRUCTURE_1");   /* resource */

/* Queue a request for concurrent read mode lock */
        status = SYS$ENQW(0,              /* efn - event flag */
                          lkmode,         /* lkmode - lock mode */
                          &lksb,          /* lksb - lock status block */
                          0,              /* flags */
                          &resnam,        /* resnam - name of resource */
                          0,              /* parid - lock id of parent */
                          &read_updates,  /* astadr - AST routine */
                          0, 0, 0, 0);
        if ((status & 1) != 1)
                LIB$SIGNAL(status);
        .
        .
        .
        lkid = lksb.lock_id;
        status = SYS$DEQ(lkid,            /* lkid - id of lock to be dequeued */
                         0, 0, 0);
        if ((status & 1) != 1)
                LIB$SIGNAL(status);
}
User-mode locks are automatically dequeued when the image exits.
7.6 Local Buffer Caching with the Lock Management Services
The lock management services provide methods for applications to
perform local buffer caching (also called distributed
buffer management). Local buffer caching allows a number of processes
to maintain copies of data (disk blocks, for example) in buffers local
to each process and to be notified when the buffers contain invalid
data because of modifications by another process. In applications where
modifications are infrequent, substantial I/O can be saved by
maintaining local copies of buffers. You can use either the lock value
block or blocking ASTs (or both) to perform buffer caching.
7.6.1 Using the Lock Value Block
To support local buffer caching using the lock value block, each process maintaining a cache of buffers maintains a null mode lock on a resource that represents the current contents of each buffer. (For this discussion, assume that the buffers contain disk blocks.) The value block associated with each resource is used to contain a disk block "version number." The first time a lock is obtained on a particular disk block, the current version number of that disk block is returned in the lock value block of the process. If the contents of the buffer are cached, this version number is saved along with the buffer. To reuse the contents of the buffer, the null lock must be converted to protected read mode or exclusive mode, depending on whether the buffer is to be read or written. This conversion returns the latest version number of the disk block. The version number of the disk block is compared with the saved version number. If they are equal, the cached copy is valid. If they are not equal, a fresh copy of the disk block must be read from disk.
Whenever a procedure modifies a buffer, it writes the modified buffer
to disk and then increments the version number before converting the
corresponding lock to null mode. In this way, the next process that
attempts to use its local copy of the same buffer finds a version
number mismatch and must read the latest copy from disk rather than use
its cached (now invalid) buffer.
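The following is a minimal sketch of the reader's side of this technique in C. The resource is assumed to already have a null mode lock (its identification is in the lock status block), the helper read_block_from_disk is hypothetical, and keeping the version number in the first longword of the value block is an arbitrary choice for the illustration.

#include <lckdef.h>
#include <lib$routines.h>
#include <starlet.h>

/* Lock status block plus the 16-byte lock value block; the disk block
   version number is kept in valblk[0] for this example.  The lock_id
   field was filled in when the null mode lock was first granted. */
static struct {
    unsigned short status;
    unsigned short reserved;
    unsigned int   lock_id;
    unsigned int   valblk[4];
} lksb;

static unsigned int saved_version;        /* version saved with the cached buffer */

extern void read_block_from_disk(void);   /* hypothetical helper */

void use_cached_block(void)
{
    unsigned int status;

    /* Convert the existing null lock to protected read and fetch the
       resource's current value block (and with it the version number). */
    status = sys$enqw(0,
                      LCK$K_PRMODE,
                      &lksb,
                      LCK$M_CONVERT | LCK$M_VALBLK,
                      0, 0, 0, 0, 0, 0, 0);
    if ((status & 1) != 1)
        lib$signal(status);

    if (lksb.valblk[0] != saved_version) {
        /* Version mismatch: the cached copy is stale. */
        read_block_from_disk();
        saved_version = lksb.valblk[0];
    }

    /* ... use the buffer, then convert back to null mode ... */
}

A writer follows the same pattern in reverse: after writing the modified buffer to disk, it increments the version number in its value block and converts its lock back to null mode with LCK$M_VALBLK set, so the new version number is stored with the resource.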
7.6.2 Using Blocking ASTs
Blocking ASTs notify processes with granted locks that another process with an incompatible lock mode has been queued to access the same resource.
Blocking ASTs support local buffer caching in two ways. One technique
involves deferred buffer writes; the other technique is an alternative
method of local buffer caching without using value blocks.
7.6.2.1 Deferring Buffer Writes
When local buffer caching is being performed, a modified buffer must be
written to disk before the exclusive mode lock can be released. If a
large number of modifications are expected (particularly over a short
period of time), you can reduce disk I/O by both maintaining the
exclusive mode lock for the entire time that the modifications are
being made and by writing the buffer once. However, this prevents other
processes from using the same disk block during this interval. This
problem can be avoided if the process holding the exclusive mode lock
has a blocking AST. The AST notifies the process if another process
needs to use the same disk block. The holder of the exclusive mode lock
can then write the buffer to disk and convert its lock to null mode
(thereby allowing the other process to access the disk block). However,
if no other process needs the same disk block, the first process can
modify it many times but write it only once.
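The sketch below shows how the blocking AST side of this might look in C. The lock status block layout and the helper write_buffer_to_disk are placeholders, and the exclusive lock is assumed to have been queued earlier with this routine named in its blkast argument.

#include <lckdef.h>
#include <lib$routines.h>
#include <starlet.h>

/* Lock status block for the buffer's lock; lock_id was filled in
   when the exclusive lock was granted. */
static struct {
    unsigned short status;
    unsigned short reserved;
    unsigned int   lock_id;
} lksb;

extern void write_buffer_to_disk(void);   /* hypothetical helper */

/* Blocking AST: another process needs the disk block.  Flush the modified
   buffer and demote the exclusive lock to null mode; a down-conversion is
   granted immediately, so the asynchronous $ENQ is sufficient here. */
void flush_blocking_ast(void)
{
    unsigned int status;

    write_buffer_to_disk();

    status = sys$enq(0,
                     LCK$K_NLMODE,
                     &lksb,
                     LCK$M_CONVERT,
                     0, 0, 0, 0, 0, 0, 0);
    if ((status & 1) != 1)
        lib$signal(status);
}

If no other process ever asks for the same disk block, the blocking AST never fires, and the buffer is written only once, when the application is finished with it.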
7.6.2.2 Buffer Caching
To perform local buffer caching using blocking ASTs, processes do not
convert their locks to null mode from protected read or exclusive mode
when finished with the buffer. Instead, they receive blocking ASTs
whenever another process attempts to lock the same resource in an
incompatible mode. With this technique, processes are notified that
their cached buffers are invalid as soon as a writer needs the buffer,
rather than the next time the process tries to use the buffer.
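A sketch of this variation follows, again with a placeholder lock status block. Here the blocking AST merely marks the cached buffer invalid and releases the protected read lock by converting it to null mode.

#include <lckdef.h>
#include <lib$routines.h>
#include <starlet.h>

static volatile int cache_valid;          /* cleared when the cached copy goes stale */

/* Lock status block for the protected read lock held on the cached buffer. */
static struct {
    unsigned short status;
    unsigned short reserved;
    unsigned int   lock_id;
} lksb;

/* Blocking AST: a writer has queued an incompatible lock request.  Mark the
   cached buffer invalid and convert to null mode so the writer can proceed. */
void invalidate_blocking_ast(void)
{
    unsigned int status;

    cache_valid = 0;
    status = sys$enq(0, LCK$K_NLMODE, &lksb, LCK$M_CONVERT,
                     0, 0, 0, 0, 0, 0, 0);
    if ((status & 1) != 1)
        lib$signal(status);
}

Before reusing the buffer, the mainline checks cache_valid; if it is clear, the mainline rereads the block and reconverts the lock to protected read, naming the blocking AST again in that conversion request.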
7.6.3 Choosing a Buffer-Caching Technique
The choice between using either version numbers or blocking ASTs to perform local buffer caching depends on the characteristics of the application. An application that uses version numbers performs more lock conversions; whereas one that uses blocking ASTs delivers more ASTs. Note that these techniques are compatible; some processes can use one technique, and other processes can use the other at the same time. Generally, blocking ASTs are preferable in a low-contention environment; whereas version numbers are preferable in a high-contention environment. You can even invent combined or adaptive strategies.
In a combined strategy, the applications use specific techniques. If a process is expected to reuse the contents of a buffer in a short amount of time, the application uses blocking ASTs; if there is no reason to expect a quick reuse, the application uses version numbers.
In an adaptive strategy, an application makes evaluations based on the rate of blocking ASTs and conversions. If blocking ASTs arrive frequently, the application changes to using version numbers; if many conversions take place and the same cached copy remains valid, the application changes to using blocking ASTs.
For example, suppose one process continually displays the state of a database, while another occasionally updates it. If version numbers are used, the displaying process must always make sure that its copy of the database is valid (by performing a lock conversion); if blocking ASTs are used, the display process is informed every time the database is updated. On the other hand, if updates occur frequently, the use of version numbers is preferable to continually delivering blocking ASTs.
7.7 Example of Using Lock Management Services
The following program segment requests a null lock for the resource
named TERMINAL. After the lock is granted, the program requests that
the lock be converted to an exclusive lock. Note that, after SYS$ENQW
returns, the program checks both the status of the system service and
the condition value returned in the lock status block to ensure that
the request completed successfully.
! Define lock modes
        INCLUDE '($LCKDEF)'
! Define lock status block
        INTEGER*2 LOCK_STATUS,
     2            NULL
        INTEGER LOCK_ID
        COMMON /LOCK_BLOCK/ LOCK_STATUS,
     2                      NULL,
     2                      LOCK_ID
   .
   .
   .
! Request a null lock
        STATUS = SYS$ENQW (,
     2                     %VAL(LCK$K_NLMODE),
     2                     LOCK_STATUS,
     2                     ,
     2                     'TERMINAL',
     2                     ,,,,,)
        IF (.NOT. STATUS) CALL LIB$SIGNAL (%VAL(STATUS))
        IF (.NOT. LOCK_STATUS) CALL LIB$SIGNAL (%VAL(LOCK_STATUS))
! Convert the lock to an exclusive lock
        STATUS = SYS$ENQW (,
     2                     %VAL(LCK$K_EXMODE),
     2                     LOCK_STATUS,
     2                     %VAL(LCK$M_CONVERT),
     2                     'TERMINAL',
     2                     ,,,,,)
        IF (.NOT. STATUS) CALL LIB$SIGNAL (%VAL(STATUS))
        IF (.NOT. LOCK_STATUS) CALL LIB$SIGNAL (%VAL(LOCK_STATUS))
To share a terminal between a parent process and a subprocess, each process requests a null lock on a shared resource name. Then, each time one of the processes wants to perform terminal I/O, it requests an exclusive lock, performs the I/O, and requests a null lock.
Because the lock manager is effective only between cooperating programs, the program that created the subprocess should not exit until the subprocess has exited. To ensure that the parent does not exit before the subprocess, specify an event flag to be set when the subprocess exits (the num argument of LIB$SPAWN). Before exiting from the parent program, use SYS$WAITFR to ensure that the event flag has been set. (You can suppress the logout message from the subprocess by using the SYS$DELPRC system service to delete the subprocess instead of allowing the subprocess to exit.)
After the parent process exits, a created process cannot synchronize access to the terminal and should use the SYS$BRKTHRU system service to write to the terminal.
This chapter describes the use of asynchronous system traps (ASTs). It contains the following sections:
Section 8.1 provides an overview of AST routines.
Section 8.2 provides information about declaring and queuing ASTs.
Section 8.3 describes common asynchronous programming mistakes.
Section 8.4 provides information about using system services for AST event and time delivery.
Section 8.5 describes access modes for ASTs.
Section 8.6 provides information about calling ASTs.
Section 8.7 provides information about delivering ASTs.
Section 8.8 describes ASTs and process wait states.
Section 8.9 presents code examples of how to use AST services.
8.1 Overview of AST Routines
Asynchronous system traps (ASTs) are interrupts that occur asynchronously (out of sequence) with respect to the process's execution. ASTs are activated asynchronously to the mainline code in response to an event, usually a timer expiration or an I/O completion. An AST provides a transfer of control to a user-specified procedure that handles the event. For example, you can use ASTs to signal a program to execute a routine whenever a certain condition occurs.
The routine executed upon delivery of an AST is called an AST routine. AST routines are coded and referenced like any other routine; they are compiled and linked in the normal fashion. An AST routine's code must be reentrant. When the AST routine is finished, the routine that was interrupted resumes execution from the point of interruption.
ASTs provide a powerful programming technique. By using ASTs, you allow other processing to continue pending the occurrence of one or more events. Polling and blocking techniques, on the other hand, can use resources inefficiently. A polling technique uses a loop that repeatedly checks whether an event has occurred. Frequent polling wastes processor time, whereas polling at longer intervals can be slow to react when the event finally occurs.
Blocking techniques force all processing to wait for the completion of a particular event. Blocking techniques can also be wasteful, for there could well be other activities the process could be performing while waiting for the occurrence of a specific event.
To deliver an AST, you use system services that specify the address of the AST routine. Then the system delivers the AST (that is, transfers control to your AST routine) at a particular time or in response to a particular event.
Some system services allow a process to request that it be interrupted when a particular event occurs. Table 8-1 shows the system services that are AST services.
System Service | Task Performed |
---|---|
SYS$SETAST | Enable or disable reception of AST requests |
SYS$DCLAST | Declare AST |
The system services that use the AST mechanism accept as an argument the address of an AST service routine, that is, a routine to be given control when the event occurs.
Table 8-2 shows some of the services that use ASTs.
System Service | Task Performed |
---|---|
SYS$DCLAST | Declare AST |
SYS$ENQ | Enqueue Lock Request |
SYS$GETDVI | Get Device/Volume Information |
SYS$GETJPI | Get Job/Process Information |
SYS$GETSYI | Get Systemwide Information |
SYS$QIO | Queue I/O Request |
SYS$SETIMR | Set Timer |
SYS$SETPRA | Set Power Recovery AST |
SYS$UPDSEC | Update Section File on Disk |
The following sections describe in more detail how ASTs work and how to
use them.
8.2 Declaring and Queuing ASTs
Most ASTs occur as the result of the completion of an asynchronous event that is initiated by a system service (for example, a SYS$QIO or SYS$SETIMR request) when the process requests notification by means of an AST.
The Declare AST (SYS$DCLAST) system service can be called to invoke a subroutine as an AST. With this service, a process can declare an AST only for the same or for a less privileged access mode.
You may find occasional use for the SYS$DCLAST system service in your programming applications; it is also useful when you want to test an AST service routine.
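For example, a minimal test of an AST service routine might look like the following C sketch; the parameter value and the use of hibernation to keep the image alive until delivery are arbitrary choices for this illustration.

#include <lib$routines.h>
#include <starlet.h>
#include <stdio.h>

/* AST routine: receives the astprm value given to SYS$DCLAST, then wakes
   the mainline so the image can exit. */
static void test_ast(unsigned __int64 astprm)
{
    printf("AST delivered, parameter = %llu\n", (unsigned long long)astprm);
    sys$wake(0, 0);
}

int main(void)
{
    unsigned int status;

    /* Declare an AST to execute at the caller's access mode, with parameter 42. */
    status = sys$dclast(test_ast, 42, 0);
    if ((status & 1) != 1)
        lib$signal(status);

    sys$hiber();        /* wait here; test_ast issues the wake */
    return 0;
}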
The following sections present programming information about declaring
and using ASTs.
8.2.1 Reentrant Code and ASTs
Compiled code that is generated by HP compilers is reentrant. Furthermore, AST routine local data generated by HP compilers is normally reentrant. Shared static data, shared external data, Fortran COMMON blocks, and group or system global section data are not inherently reentrant and usually require explicit synchronization.
Because the queuing mechanism for an AST does not provide for returning a function value or passing more than one argument, you should write an AST routine as a subroutine. This subroutine should use nonvolatile storage that is valid over the life of the AST. To establish nonvolatile storage, you can use the LIB$GET_VM run-time routine. You can also use a high-level language's storage keywords to create permanent nonvolatile storage. For instance, in C you can use the extern and static storage classes, or storage allocated with the malloc() routine.
In some cases, a system service that queues an AST (for example,
SYS$GETJPI) allows you to specify an argument for the AST routine. If
you choose to pass the argument, the AST routine must be written to
accept the argument.
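The following sketch illustrates both points with the Set Timer (SYS$SETIMR) service: the AST parameter is the address of a context block, and that block is allocated with malloc so that it remains valid for however long the timer request is pending. The context layout and the five-second delta time are placeholders.

#include <lib$routines.h>
#include <starlet.h>
#include <stdio.h>
#include <stdlib.h>

struct timer_ctx {                 /* hypothetical context handed to the AST */
    int         id;
    const char *label;
};

/* AST routine written as a subroutine taking the single AST parameter. */
static void timer_ast(struct timer_ctx *ctx)
{
    printf("timer %d (%s) expired\n", ctx->id, ctx->label);
    free(ctx);
    sys$wake(0, 0);
}

int main(void)
{
    unsigned int status;
    __int64 delta = -50000000;            /* 5-second delta time, in 100-ns units */
    struct timer_ctx *ctx = malloc(sizeof *ctx);

    ctx->id = 1;
    ctx->label = "example";

    /* The reqidt argument is delivered to the AST routine as its parameter. */
    status = sys$setimr(0, (void *)&delta, timer_ast, (unsigned __int64)ctx, 0);
    if ((status & 1) != 1)
        lib$signal(status);

    sys$hiber();                          /* timer_ast issues the wake */
    return 0;
}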
8.2.1.1 The Call Frame
When a routine is active under OpenVMS, it has available to it temporary storage on a stack, in a construct known as a stack frame, or call frame. Each time a subroutine call is made, another call frame is pushed onto the stack and storage is made available to that subroutine. Each time a subroutine returns to its caller, the subroutine's call frame is pulled off the stack, and the storage is made available for reuse by other subroutines. Call frames therefore are nested. Outer call frames remain active longer, and the outermost call frame, the call frame associated with the main routine, is normally always available.
A primary exception to this call frame condition is when an exit handler runs. With an exit handler running, only static data is available. The exit handler effectively has its own call frame. Exit handlers are declared with the SYS$DCLEXH system service.
The use of call frames for storage means that all routine-local data is reentrant; that is, each subroutine has its own storage for the routine-local data.
The allocation of storage that is known to the AST must be in memory that is not volatile over the possible interval the AST might be pending. This means you must be familiar with how the compilers allocate routine-local storage using the stack pointer and the frame pointer. This storage is valid only while the stack frame is active. Should the routine associated with the stack frame return, the AST cannot write to this storage without risking severe corruption of application data.
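To make the hazard concrete, the hypothetical sketch below passes the address of AST context storage to SYS$SETIMR in two ways; status checks are omitted for brevity, and the delta time and routine names are placeholders.

#include <starlet.h>

static __int64 delta = -50000000;            /* 5-second delta time, 100-ns units */

/* AST routine: writes through the pointer it was given as its parameter. */
static void count_ast(int *counter)
{
    (*counter)++;
}

/* WRONG: counter lives in this routine's call frame.  The frame is popped
   as soon as the routine returns, but the AST may be delivered seconds
   later and would then write into storage that has been reused. */
void schedule_timer_wrong(void)
{
    int counter = 0;
    sys$setimr(0, (void *)&delta, count_ast, (unsigned __int64)&counter, 0);
}

/* SAFE: static storage is not tied to any call frame, so the address is
   still valid whenever the AST is finally delivered. */
static int safe_counter;

void schedule_timer_safe(void)
{
    sys$setimr(0, (void *)&delta, count_ast, (unsigned __int64)&safe_counter, 0);
}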