With kernel threads, the OpenVMS operating system implements the following two features: multiple execution contexts within a process, and efficient use of the OpenVMS and POSIX Threads Library schedulers.
2.8.2.1 Multiple Execution Contexts Within a Process
Before the implementation of kernel threads, the scheduling model for the OpenVMS operating system was per process. The only scheduling context was the process itself, that is, only one execution context per process. Since a threaded application could create thousands of threads, many of these threads could potentially be executing at the same time. But because OpenVMS processes had only a single execution context, in effect, only one of those application threads was running at any one time. If this multithreaded application was running on a multiprocessor system, the application could not make use of more than a single CPU.
After the implementation of kernel threads, the scheduling model allows for multiple execution contexts within a process; that is, more than one application thread can be executing concurrently. These execution contexts are called kernel threads. Kernel threads allow a multithreaded application to have a thread executing on every CPU in a multiprocessor system. Therefore, kernel threads allow a threaded application to take advantage of multiple CPUs in a symmetric multiprocessing (SMP) system.
The maximum number of kernel threads that can be created in a process
is 256.
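As an illustration of an application that creates multiple user threads, which the kernel threads model can then run on multiple CPUs, here is a minimal POSIX Threads sketch in C; the thread body and the count of four threads are illustrative only:

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define NUM_THREADS 4   /* illustrative; any count within process limits */

    /* Simple thread body; each thread can run on its own kernel thread,
       and therefore on its own CPU, when MULTITHREAD permits it. */
    static void *worker(void *arg)
    {
        long id = (long)arg;
        printf("worker %ld running\n", id);
        return NULL;
    }

    int main(void)
    {
        pthread_t threads[NUM_THREADS];

        for (long i = 0; i < NUM_THREADS; i++) {
            if (pthread_create(&threads[i], NULL, worker, (void *)i) != 0) {
                perror("pthread_create");
                exit(EXIT_FAILURE);
            }
        }

        /* Wait for all workers to finish before the image exits. */
        for (long i = 0; i < NUM_THREADS; i++)
            pthread_join(threads[i], NULL);

        return 0;
    }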
2.8.2.2 Efficient Use of the OpenVMS and POSIX Threads Library Schedulers
The user mode thread manager schedules individual user mode application threads. On OpenVMS, POSIX Threads Library is the user mode threading package of choice. Before the implementation of kernel threads, POSIX Threads Library multiplexed user mode threads on the single OpenVMS execution context, the process. POSIX Threads Library implemented parts of its scheduling by using a periodic timer AST. When the AST executed and the thread manager gained control, the thread manager could then select a new application thread for execution. But because the thread manager could not detect that a thread had entered an OpenVMS wait state, the entire application blocked until that periodic AST was delivered. That resulted in a delay until the thread manager regained control and could schedule another thread. Once the thread manager gained control, it might schedule a previously preempted thread without knowing that the thread was still in a wait state. The lack of integration between the OpenVMS and POSIX Threads Library schedulers could result in wasted CPU resources.
After the implementation of kernel threads, the scheduling model
provides for scheduler callbacks, which are not enabled by default. A scheduler
callback is an upcall from the OpenVMS scheduler to the thread manager
whenever a thread changes state. This upcall allows the OpenVMS
scheduler to inform the thread manager that the current thread is
stalled and that another thread should be scheduled. Upcalls also
inform the thread manager that an event a thread is waiting on has
completed. The two schedulers are now better integrated, minimizing
application thread scheduling delays.
2.8.2.3 Terminating a POSIX Threads Image
To avoid hangs or a disorderly shutdown of a multithreaded process, HP recommends that you issue an upcall with an EXIT command at the DCL prompt ($). This procedure causes a normal termination of the image currently executing. If the image declared any exit-handling routines, they are then given control. The exit handlers are run in a separate thread, which allows them to be synchronized with activities in other threads. This allows them to block without danger of entering a self-deadlock due to the handler having been invoked in a context that already held resources.
The effect of calling the EXIT command on the calling thread is the same as calling pthread_exit(): the caller's stack is unwound and the thread is terminated. This allows each frame on the stack to have an opportunity to be notified and to take action during the termination, so that it can then release any resource which it holds that might be required for an exit handler. By using upcalls, you have a way out of self-deadlock problems that can impede image rundown.
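The per-frame notification during unwinding is what POSIX cleanup handlers provide. The following is a minimal sketch, assuming an illustrative mutex as the resource to be released; the names are hypothetical:

    #include <pthread.h>

    static pthread_mutex_t resource_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Cleanup handler: runs if the frame below is unwound, for example by
       pthread_exit() or by image termination with upcalls enabled. */
    static void release_lock(void *arg)
    {
        pthread_mutex_unlock((pthread_mutex_t *)arg);
    }

    static void *worker(void *arg)
    {
        pthread_mutex_lock(&resource_lock);
        pthread_cleanup_push(release_lock, &resource_lock);

        /* ... work; if the thread is terminated here, release_lock runs,
           so the mutex is not left held when exit handlers need it ... */

        pthread_cleanup_pop(1);   /* pop and run the handler on normal exit too */
        return arg;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, worker, NULL);
        pthread_join(t, NULL);
        return 0;
    }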
You can optionally perform a rundown by using the Ctrl/Y-EXIT command sequence. By doing this with upcalls enabled, you release the exit handler thread. All other threads continue to execute untouched. This removes the possibility of the self-deadlock problem that is common when you invoke exit handlers asynchronously in an existing context. However, invoking exit handlers does not automatically initiate any kind of implicit shutdown of the threads in the process. Because of this, it is up to the application to request explicitly the shutdown of its threads from its exit handler and to ensure that their shutdown is complete before returning from the exit handler. By having the application do this, you ensure that subsequent exit handlers do not encounter adverse operating conditions, such as threads that access files after they have been closed, or the inability to close files because they are being accessed by threads.
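As a sketch of that shutdown responsibility, the following uses the portable atexit() mechanism as a stand-in for an OpenVMS exit handler (an image would more typically declare one with the SYS$DCLEXH system service); the worker bookkeeping and shutdown flag are illustrative:

    #include <pthread.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Illustrative worker bookkeeping; names are hypothetical. */
    #define NWORKERS 2
    static pthread_t workers[NWORKERS];
    static volatile int shutting_down = 0;   /* simplified shutdown flag */

    static void *worker(void *arg)
    {
        while (!shutting_down)
            sleep(1);   /* ... do a unit of work ... */
        return arg;
    }

    /* Exit handler: request thread shutdown and wait for it to complete
       before returning, so later exit handlers see a quiesced process. */
    static void shutdown_threads(void)
    {
        shutting_down = 1;
        for (int i = 0; i < NWORKERS; i++)
            pthread_join(workers[i], NULL);
    }

    int main(void)
    {
        atexit(shutdown_threads);   /* portable stand-in for SYS$DCLEXH */

        for (int i = 0; i < NWORKERS; i++)
            pthread_create(&workers[i], NULL, worker, NULL);

        /* ... main-line work ... */
        return 0;   /* image exit: shutdown_threads runs first */
    }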
Along with using Ctrl/Y-EXIT to perform shutdowns, you can issue a
Ctrl/Y-STOP command. If you use Ctrl/Y-STOP, it is recommended that you
do so with upcalls enabled. Using Ctrl/Y-STOP without upcalls can cause
a disorderly or unexpected outcome.
2.8.3 Kernel Threads Model and Design Features
This section presents the type of kernel threads model that OpenVMS
Alpha and OpenVMS I64 implement, and some features of the operating
system design that changed to implement the kernel thread model.
2.8.3.1 Kernel Threads Model
The OpenVMS kernel threads model maps a few kernel threads to many
user threads, with integrated schedulers. With this model, many user
threads are multiplexed onto a small number of execution contexts, or
kernel threads. The kernel threads have no
knowledge of the individual threads within an application. The thread
manager multiplexes those user threads on an execution context, though
a single process can have multiple execution contexts. This model also
integrates the user mode thread manager scheduler with the OpenVMS
scheduler.
2.8.3.2 Kernel Threads Design Features
Design additions and modifications have been made to the following features of OpenVMS: the process structure, access to inner modes, scheduling, ASTs, event flags, and process control services.
2.8.3.2.1 Process Structure
With the implementation of OpenVMS kernel threads, every process is a
threaded process with at least one kernel thread. Every kernel thread
gets stacks for each access mode. Quotas and limits are maintained and
enforced at the process level. The process virtual address space
remains per process and is shared by all threads. The scheduling entity
moves from the process to the kernel thread. In general, ASTs are
delivered directly to the kernel threads. Event flags and locks remain
per process. See Section 2.8.4 for more information.
2.8.3.2.2 Access to Inner Modes
With the implementation of kernel threads, a single threaded process
continues to function exactly as it has in the past. A multithreaded
process may have multiple threads executing in user mode or in user
mode ASTs, as is also possible for supervisor mode. Except in cases
where an activity in inner mode is considered thread
safe, a multithreaded process may have only a single thread
executing in an inner mode at any one time. Multithreaded processes
retain the normal preemption of inner mode by more inner mode ASTs. A
special inner mode semaphore serializes access to inner mode.
2.8.3.2.3 Scheduling
With the implementation of kernel threads, the OpenVMS scheduler
concerns itself with kernel threads, and not processes. At certain
points in the OpenVMS executive at which the scheduler would otherwise
place a kernel thread into a wait state, it can instead transfer control to the thread manager.
This transfer of control, known as a callback or upcall, allows the
thread manager the chance to reschedule stalled application threads.
2.8.3.2.4 ASTs
With the implementation of kernel threads, ASTs are not delivered to
the process. They are delivered to the kernel thread on which the event
was initiated. Inner mode ASTs are generally delivered to the kernel
thread already in inner mode. If no thread is in inner mode, the AST is
delivered to the kernel thread that initiated the event.
2.8.3.2.5 Event Flags
With the implementation of kernel threads, event flags continue to
function on a per-process basis, maintaining compatibility with
existing application behavior.
2.8.3.2.6 Process Control Services
With the implementation of kernel threads, many process control
services continue to function at the process level. SYS$SUSPEND and
SYS$RESUME system services, for example, continue to change the
scheduling state of the entire process, including all of its threads.
Other services such as SYS$HIBER and SYS$SCHDWK act on individual
kernel threads instead of the entire process.
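As an illustration of the per-kernel-thread behavior of SYS$HIBER and SYS$SCHDWK, the following minimal sketch schedules a wakeup and hibernates; it assumes the standard starlet.h, descrip.h, and gen64def.h declarations, and the five-second delta time is illustrative:

    #include <starlet.h>     /* sys$bintim, sys$schdwk, sys$hiber */
    #include <descrip.h>     /* $DESCRIPTOR */
    #include <gen64def.h>    /* struct _generic_64 (quadword time) */
    #include <stdio.h>

    int main(void)
    {
        $DESCRIPTOR(delta_dsc, "0 00:00:05.00");   /* five-second delta time */
        struct _generic_64 delta_time;
        unsigned int status;

        /* Convert the ASCII delta time to a binary quadword. */
        status = sys$bintim(&delta_dsc, &delta_time);
        if (!(status & 1)) return status;

        /* Schedule a wakeup and hibernate.  With kernel threads, only the
           calling kernel thread hibernates; other kernel threads in the
           process continue to execute. */
        status = sys$schdwk(0, 0, &delta_time, 0);
        if (!(status & 1)) return status;

        sys$hiber();
        printf("kernel thread woke up\n");
        return 0;
    }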
2.8.4 Kernel Threads Process Structure
This section describes the components that make up a kernel threads process: the process control block (PCB) and process header (PHD), the kernel thread block (KTB), the floating-point registers and execution data (FRED) block, the kernel threads region, the per-kernel-thread stacks, and the per-kernel-thread data cells.
2.8.4.1 Process Control Blocks (PCBs) and Process Headers (PHDs)
Two primary data structures exist in the OpenVMS executive that describe the context of a process: the process control block (PCB) and the process header (PHD).
The PCB contains fields that identify the process to the system. The PCB comprises contexts that pertain to quotas and limits, scheduling state, privileges, AST queues, and identifiers. In general, any information that is required to be resident at all times is in the PCB. Therefore, the PCB is allocated from nonpaged pool.
The PHD contains fields that pertain to a process's virtual address
space. The PHD contains the process section table. The PHD also
contains the hardware process control block (HWPCB) and a
floating-point register save area. The HWPCB contains the hardware
execution context of the process. The PHD is allocated as part of a
balance set slot.
2.8.4.1.1 Effect of a Multithreaded Process on the PCB and PHD
With multiple execution contexts within the same process, the multiple threads of execution all share the same address space, but have some independent software and hardware context. This change to a multithreaded process affects the PCB and PHD structures, and any code that references them.
Before the implementation of kernel threads, the PCB contained much context that was per-process. Now, with the introduction of multiple threads of execution, much context becomes per-thread. To accommodate per-thread context, a new data structure, the kernel thread block (KTB), is created, with the per-thread context removed from the PCB. However, the PCB continues to contain context common to all threads, such as quotas and limits. The new per-kernel thread structure contains the scheduling state, priority, and the AST queues.
The PHD contains the HWPCB that gives a process its single execution
context. The HWPCB remains in the PHD; this HWPCB is used by a process
when it is first created. This execution context is also called the
initial thread. A single threaded process has only this one execution
context. A new structure, the floating-point registers and execution
data block (FRED), is created to contain the hardware context of the
newly created kernel threads. Since all threads in a process share the
same address space, the PHD and page tables continue to describe the
entire virtual memory layout of the process.
2.8.4.2 Kernel Thread Block (KTB)
The kernel thread block (KTB) is a new per-kernel-thread data structure. The KTB contains all per-thread software context moved from the PCB. The KTB is the basic unit of scheduling, a role previously performed by the PCB, and is the data structure placed in the scheduling state queues.
Typically, the number of KTBs a multithreaded process has is the same as the number of CPUs on the system. More precisely, the number of KTBs is limited by the value of the system parameter MULTITHREAD. If MULTITHREAD is zero, OpenVMS kernel threads support is disabled. With kernel threads disabled, user-level threading is still possible with POSIX Threads Library; the environment is identical to the OpenVMS environment prior to the OpenVMS Version 7.0 release. If MULTITHREAD is nonzero, it represents the maximum number of execution contexts or kernel threads that a process can own, including the initial one.
The KTB, in reality, is not an independent structure from the PCB. Both
the PCB and KTB are defined as sparse structures. The fields of the PCB
that move to the KTB retain their original PCB offsets in the KTB. In
the PCB, these fields are unused. In effect, if the two structures are
overlaid, the result is the PCB as it currently exists with new fields
appended at the end. The PCB and KTB for the initial thread occupy the
same block of nonpaged pool; therefore, the KTB address for the initial
thread is the same as for the PCB.
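The overlay arrangement can be pictured with a purely illustrative C sketch; the structure and field names below are hypothetical and do not reflect the actual OpenVMS definitions:

    /* Illustrative only: hypothetical field names, not the real OpenVMS
       definitions.  Fields that moved to the KTB keep their original PCB
       offsets, so the KTB can be overlaid on the PCB for the initial thread. */

    struct pcb_like {
        unsigned int quota_limits;   /* per-process: stays in the PCB       */
        unsigned int sched_state;    /* moved to the KTB; unused in the PCB */
        unsigned int priority;       /* moved to the KTB; unused in the PCB */
    };

    struct ktb_like {
        unsigned int unused_quota;   /* placeholder keeping the PCB offset  */
        unsigned int sched_state;    /* same offset as in struct pcb_like   */
        unsigned int priority;       /* same offset as in struct pcb_like   */
        unsigned int new_ktb_field;  /* new per-thread data appended at end */
    };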
2.8.4.3 Floating-Point Registers and Execution Data Blocks (FREDs)
To allow for multiple execution contexts, not only are additional KTBs required to maintain the software context, but additional HWPCBs must be created to maintain the hardware context. Space is allocated with each HWPCB for preserving the contents of the floating-point registers across context switches. Additional bytes are allocated for per-kernel-thread data.
The combined structure that contains the HWPCB, floating-point register
save area, and the per-kernel thread data is called the floating-point
registers and execution data (FRED) block. Prior to Version 7.2,
OpenVMS supported 16 kernel threads per process. As of Version 7.2,
OpenVMS supports 256 kernel threads per process. Also, prior to Version
7.3-1, OpenVMS allocated the maximum number of FRED blocks for a given
process when that process was created, even if the process did not
become multithreaded. With Version 7.3-1 and higher, OpenVMS allocates
FRED blocks only as they are needed.
2.8.4.4 Kernel Threads Region
Much process context resides in P1 space, taking the form of data cells
and the process stacks. Some of these data cells need to be per kernel
thread, as do the stacks. During initialization of the multithread
environment, a kernel thread region in P1 space is initialized to
contain the per-kernel-thread data cells and stacks. The region begins
at the boundary between P0 and P1 space at address 40000000x, and it
grows toward higher addresses and the initial thread's user stack. The
region is divided into per-kernel-thread areas. Each area contains
pages for data cells and the access mode stacks.
2.8.4.5 Per-Kernel Thread Stacks
A process is created with separate stacks in P1 space for the four access modes. On Alpha systems, each access mode has a memory stack. A memory stack is used for storing data local to a procedure, saving register contents temporarily, and recording nested procedure call information. On I64 systems, memory stacks are used for storing data local to a procedure and for saving register contents temporarily, but not for recording nested procedure call information.
To reduce procedure call overhead, the Intel® Itanium® architecture provides a large number of registers. Some, the so-called static registers, are shared by a caller and the procedure it calls; others, the dynamic or stacked registers, are not shared. When a procedure is called, it allocates as many dynamic general registers as it needs. On I64 systems, nested procedure call information is recorded in the dynamic registers.
The I64 systems manage the dynamic registers like a stack, keeping track of each procedure's allocation. Each procedure could, in fact, allocate all the dynamic registers for its own use. Whenever the dynamic register use by nested procedures cannot be accommodated by physical registers, the hardware saves the dynamic registers in an in-memory area established by OpenVMS called the register backing store or register stack. On I64 systems, OpenVMS creates a register stack whenever it creates a memory stack. Unlike memory stacks, register stacks grow from low addresses to high addresses.
Stack sizes are either fixed, determined by a SYSGEN parameter, or expandable. The parameter KSTACKPAGES controls the size of the kernel stack. Supervisor and executive mode stack sizes are fixed.
For the user stack, a more complex situation exists. OpenVMS allocates P1 space from high to lower addresses. The user stack is placed after the lowest P1 space address allocated. This allows the user stack to expand on demand toward P0 space. With the introduction of multiple sets of stacks, the locations of these stacks impose a limit on the size of each area in which they can reside. With the implementation of kernel threads, the user stack is no longer boundless. The initial user stack remains semi-boundless; it still grows toward P0 space, but the limit is the per-kernel thread region instead of P0 space. The default user stack in a process can expand on demand to be quite large, so single threaded applications do not typically run out of user stack.
When an application is written using POSIX Threads Library, however, each POSIX thread gets its own user stack, which is a fixed size. POSIX thread stacks are allocated from the P0 heap. Large stacks might cause the process to exceed its memory quotas. In an extreme case, the P0 region could fill completely, in which case the process might need to reduce the number of threads in use concurrently or make other changes to lessen the demand for P0 memory.
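An application that knows some of its threads need deep recursion or large stack frames can request a larger, though still fixed-size, stack when creating those threads. The following is a minimal sketch using the standard pthread_attr_setstacksize() routine; the 1 MB figure is illustrative:

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    static void *worker(void *arg)
    {
        /* ... stack-hungry work ... */
        return arg;
    }

    int main(void)
    {
        pthread_attr_t attr;
        pthread_t t;

        /* Request a larger-than-default, but still fixed-size, stack so the
           thread does not overflow it; the stack is allocated from P0 space. */
        pthread_attr_init(&attr);
        pthread_attr_setstacksize(&attr, 1024 * 1024);   /* illustrative size */

        if (pthread_create(&t, &attr, worker, NULL) != 0) {
            perror("pthread_create");
            return EXIT_FAILURE;
        }
        pthread_join(t, NULL);
        pthread_attr_destroy(&attr);
        return 0;
    }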
If the application developer underestimates the stack requirements, the application may fail due to a thread overflowing its stack. This failure is typically reported as an access violation and is very difficult to diagnose. To address this problem, yellow stack zones were introduced in OpenVMS Version 7.2 and are available to applications using POSIX Threads Library.
Yellow stack zones are a mechanism by which the stack overflow can be
signaled back to the application. The application can then choose
either to provide a stack overflow handler or do nothing. If the
application does nothing, this mechanism helps pinpoint the failure for
the application developer. Instead of an access violation being
signaled, a stack overflow error is signaled.
2.8.4.6 Per-Kernel-Thread Data Cells
Several pages in P1 space contain process state in the form of data
cells. A number of these cells must have a per-kernel-thread
equivalent. These data cells do not all reside on pages with the same
protection. Because of this, the per-kernel-thread area reserves two
pages for these cells. Each page has a different page protection; one
page protection is user read, user write (URUW); the other is user
read, executive write (UREW).
2.8.4.7 Summary of Process Data Structures
Process creation results in a PCB/KTB, a PHD/FRED, and a set of stacks. All processes have a single kernel thread, the initial thread.
A multithreaded process always begins as a single threaded process. A
multithreaded process contains a PCB/KTB pair and a PHD/FRED pair for
the initial thread; for its other threads, it contains additional KTBs,
additional FREDs, and additional sets of stacks. When the multithreaded
application exits, the process returns to its single threaded state,
and all additional KTBs, FREDs, and stacks are deleted.
2.8.4.8 Kernel Thread Priorities
The SYS$SETPRI system service and the SET PROCESS/PRIORITY DCL command both take a process identification value (PID) as an input and therefore affect only a single kernel thread at a time. If you want to change the base priorities of all kernel threads in a process, you must either make a separate call to SYS$SETPRI or invoke the SET PROCESS/PRIORITY command for each thread.
In addition, a value for the 'policy' parameter to the SYS$SETPRI system service was added. If JPI$K_ALL_THREADS is specified, the call to SYS$SETPRI changes the base priorities of all kernel threads in the target process.
The same support is provided by the ALL_THREADS qualifier to the SET
PROCESS/PRIORITY DCL command.
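The following sketch attempts to change the base priority of all kernel threads in the current process in one call. The argument order follows the general $SETPRI format (pidadr, prcnam, pri, prvpri, policy, prvpol); whether the policy argument is passed by reference, and the availability of JPI$K_ALL_THREADS in jpidef.h, are assumptions here, so check the HP OpenVMS System Services Reference Manual before relying on this:

    #include <starlet.h>   /* sys$setpri */
    #include <jpidef.h>    /* JPI$K_ALL_THREADS (assumed to be defined here) */
    #include <stdio.h>

    int main(void)
    {
        unsigned int prev_pri;
        unsigned int policy = JPI$K_ALL_THREADS;  /* apply to all kernel threads */
        unsigned int status;

        /* Set the base priority of every kernel thread in the current process
           to 6.  Passing the policy longword by reference is an assumption. */
        status = sys$setpri(0, 0, 6, &prev_pri, &policy, 0);
        if (!(status & 1))
            printf("sys$setpri failed, status %u\n", status);

        return 0;
    }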
2.9 THREADCP Command Not Supported on OpenVMS I64
The THREADCP command is not supported on OpenVMS I64. For OpenVMS I64, the SET IMAGE and SHOW IMAGE commands can be used to check and modify the state of threads-related image header bits, similar to the THREADCP command on OpenVMS Alpha. For example, the THREADCP/SHOW image command is analogous to the SHOW IMAGE image command. As another example, the THREADCP/ENABLE=flags image command is analogous to the SET IMAGE/LINKFLAGS=flags image command.
The SHOW IMAGE and SET IMAGE commands are documented in the
HP OpenVMS DCL Dictionary: N--Z.
2.10 KPS Services (Alpha and I64 only)
As of OpenVMS Version 8.2, KPS services enable a thread of execution in one access mode to have multiple stacks. These services were initially developed to allow a device driver to create a fork process with a private stack on which to retain execution context across stalls and restarts. They have been extended to be usable by process context code running in any access mode.
Various OpenVMS components use KPS services to multithread their operations. RMS, for example, can have multiple asynchronous I/O operations in progress in response to process requests from multiple access modes. Each request is processed on a separate memory stack and, on I64, separate register stack as well.