In the following example, two programs communicate through a global section. The first program creates and maps a global section (by using SYS$CRMPSC) and then writes a device name to the section. This program also defines the device, terminal, and process names and sets the event flags that synchronize the processes.
The second program maps the section (by using SYS$MGBLSC), reads the device name from the section, and obtains the name of the process that allocated the device and of any terminal allocated to that process. It then writes the process and terminal names to the global section, where they can be read by the first program.
The example program uses the SYS$MGBLSC system service. However, the SYS$MGBLSC_64 system service is the preferred method for mapping global sections for Alpha and I64 systems.
The common event flag cluster is used to synchronize access to the global section. The first program sets REQUEST_FLAG to indicate that the device name is in the section. The second program sets INFO_FLAG to indicate that the process and terminal names are available.
Data in a section must be page aligned. The following line in the options file used at link time causes the data in the common area named DATA to be page aligned:

PSECT_ATTR = DATA, PAGE
For high-level language usage, use the solitary attribute of the linker. See the HP OpenVMS Linker Utility Manual for an explanation of how to use the solitary attribute. The address range requested for a section must end on a page boundary, so SYS$GETSYI is used to obtain the system page size.
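The same page-size query can also be made from C. The following is a minimal, illustrative sketch (the item-list structure is declared by hand here and is not part of the example programs; 32-bit pointers are assumed) that calls SYS$GETSYIW with a SYI$_PAGE_SIZE item list:

#include <stdio.h>
#include <starlet.h>         /* sys$getsyiw */
#include <syidef.h>          /* SYI$_PAGE_SIZE */
#include <lib$routines.h>    /* lib$signal */

/* Classic three-longword item-list entry (assumes 32-bit pointers). */
typedef struct {
    unsigned short buflen;   /* length of the caller's buffer        */
    unsigned short itmcod;   /* item code, e.g. SYI$_PAGE_SIZE       */
    void *bufadr;            /* address of the caller's buffer       */
    void *retlen;            /* address for returned length, or 0    */
} ITEM;

int main(void)
{
    unsigned int status, page_size = 0;
    ITEM itmlst[] = {
        { sizeof page_size, SYI$_PAGE_SIZE, &page_size, 0 },
        { 0, 0, 0, 0 }                       /* terminator */
    };

    /* Synchronous form of SYS$GETSYI; local node, no IOSB, no AST. */
    status = sys$getsyiw(0, 0, 0, (void *) itmlst, 0, 0, 0);
    if ((status & 1) != 1)
        lib$signal(status);
    else
        printf("System page size: %u bytes\n", page_size);
    return 0;
}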
Before executing the first program, you need to write a user-open routine that sets the user-open bit (FAB$V_UFO) of the FAB options longword (FAB$L_FOP). Because the Fortran OPEN statement specifies that the file is new, you should use $CREATE to open it rather than $OPEN. No $CONNECT should be issued. The user-open routine reads the channel number that the file is opened on from the status longword (FAB$L_STV) and returns that channel number to the main program by using a common block (CHANNEL in this example).
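For reference, the following C fragment sketches what such a user-open routine does; it is not the Fortran UFO_CREATE function used by the example, and the helper name, file-access settings, and allocation size are illustrative assumptions. The routine sets the user-file-open bit, creates the file through RMS, and picks the channel number up from FAB$L_STV:

#include <rms.h>             /* struct FAB, cc$rms_fab, FAB$M_UFO, ... */
#include <string.h>
#include <starlet.h>         /* sys$create */
#include <lib$routines.h>    /* lib$signal */

/* Hypothetical helper: create "name" with "blocks" blocks preallocated,
   leave it open on a channel, and return that channel number. */
unsigned short open_section_file(char *name, unsigned int blocks)
{
    struct FAB fab = cc$rms_fab;              /* initialized FAB prototype   */
    unsigned int status;

    fab.fab$l_fna = name;                     /* file name string            */
    fab.fab$b_fns = (unsigned char) strlen(name);
    fab.fab$l_alq = blocks;                   /* initial allocation (blocks) */
    fab.fab$b_fac = FAB$M_GET | FAB$M_PUT;    /* read and write access       */
    fab.fab$l_fop = FAB$M_UFO;                /* user-file-open: assign a    */
                                              /* channel, bypass record I/O  */

    status = sys$create(&fab);                /* $CREATE, since file is new  */
    if ((status & 1) != 1)
        lib$signal(status);

    /* With FAB$V_UFO set, RMS returns the assigned channel number in the
       status-value longword of the FAB. */
    return (unsigned short) fab.fab$l_stv;
}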
!This is the program that creates the global section.

! Define global section flags
        INCLUDE '($SECDEF)'
! Mask for section flags
        INTEGER SEC_MASK
! Logical unit number for section file
        INTEGER INFO_LUN
! Channel number for section file
! (returned from useropen routine)
        INTEGER SEC_CHAN
        COMMON /CHANNEL/ SEC_CHAN
! Length for the section file
        INTEGER SEC_LEN
! Data for the section file
        CHARACTER*12 DEVICE,
     2               PROCESS
        CHARACTER*6 TERMINAL
        COMMON /DATA/ DEVICE,
     2                PROCESS,
     2                TERMINAL
! Location of data
        INTEGER PASS_ADDR (2),
     2          RET_ADDR (2)
! Two common event flags
        INTEGER REQUEST_FLAG,
     2          INFO_FLAG
        DATA REQUEST_FLAG /70/
        DATA INFO_FLAG /71/
! Data for SYS$GETSYI
        INTEGER PAGE_SIZE
        INTEGER*2 BUFF_LEN, ITEM_CODE
        INTEGER BUFF_ADDR, LENGTH, TERMINATOR
        EXTERNAL SYI$_PAGE_SIZE
        COMMON /GETSYI_ITEMLST/ BUFF_LEN,
     2                          ITEM_CODE,
     2                          BUFF_ADDR,
     2                          LENGTH,
     2                          TERMINATOR
! User-open routines
        INTEGER UFO_CREATE
        EXTERNAL UFO_CREATE
           .
           .
           .
! Open the section file
        STATUS = LIB$GET_LUN (INFO_LUN)
        IF (.NOT. STATUS) CALL LIB$SIGNAL(%VAL(STATUS))
        SEC_MASK = SEC$M_WRT .OR. SEC$M_DZRO .OR. SEC$M_GBL
! (Last element - first element + size of last element + 511)/512
        SEC_LEN = ( (%LOC(TERMINAL) - %LOC(DEVICE) + 6 + 511)/512 )
        OPEN (UNIT=INFO_LUN,
     2        FILE='INFO.TMP',
     2        STATUS='NEW',
     2        INITIALSIZE = SEC_LEN,
     2        USEROPEN = UFO_CREATE)
! Free logical unit number and map section
        CLOSE (INFO_LUN)
! Get the system page size
        BUFF_LEN = 4
        ITEM_CODE = %LOC(SYI$_PAGE_SIZE)
        BUFF_ADDR = %LOC(PAGE_SIZE)
        LENGTH = 0
        TERMINATOR = 0
        STATUS = SYS$GETSYI(,,,BUFF_LEN,,,)
! Get location of data
        PASS_ADDR (1) = %LOC (DEVICE)
        PASS_ADDR (2) = PASS_ADDR(1) + PAGE_SIZE - 1

        STATUS = SYS$CRMPSC (PASS_ADDR,      ! Address of section
     2                       RET_ADDR,       ! Addresses mapped
     2                       ,
     2                       %VAL(SEC_MASK), ! Section mask
     2                       'GLOBAL_SEC',   ! Section name
     2                       ,,
     2                       %VAL(SEC_CHAN), ! I/O channel
     2                       ,,,)
        IF (.NOT. STATUS) CALL LIB$SIGNAL(%VAL(STATUS))
! Create the subprocess
        STATUS = SYS$CREPRC (,
     2                       'GETDEVINF',  ! Image
     2                       ,,,,,
     2                       'GET_DEVICE', ! Process name
     2                       %VAL(4),,,)   ! Priority
        IF (.NOT. STATUS) CALL LIB$SIGNAL(%VAL(STATUS))
! Write data to section
        DEVICE = '$DISK1'
! Get common event flag cluster and set flag
        STATUS = SYS$ASCEFC (%VAL(REQUEST_FLAG),
     2                       'CLUSTER',,)
        IF (.NOT. STATUS) CALL LIB$SIGNAL(%VAL(STATUS))
        STATUS = SYS$SETEF (%VAL(REQUEST_FLAG))
        IF (.NOT. STATUS) CALL LIB$SIGNAL(%VAL(STATUS))
! When GETDEVINF has the information, INFO_FLAG is set
        STATUS = SYS$WAITFR (%VAL(INFO_FLAG))
        IF (.NOT. STATUS) CALL LIB$SIGNAL(%VAL(STATUS))
           .
           .
           .

! This is the program that maps to the global section
! created by the previous program.

! Define section flags
        INCLUDE '($SECDEF)'
! Mask for section flags
        INTEGER SEC_MASK
! Data for the section file
        CHARACTER*12 DEVICE,
     2               PROCESS
        CHARACTER*6 TERMINAL
        COMMON /DATA/ DEVICE,
     2                PROCESS,
     2                TERMINAL
! Location of data
        INTEGER PASS_ADDR (2),
     2          RET_ADDR (2)
! Two common event flags
        INTEGER REQUEST_FLAG,
     2          INFO_FLAG
        DATA REQUEST_FLAG /70/
        DATA INFO_FLAG /71/
! Data for SYS$GETSYI
        INTEGER PAGE_SIZE
        INTEGER*2 BUFF_LEN, ITEM_CODE
        INTEGER BUFF_ADDR, LENGTH, TERMINATOR
        EXTERNAL SYI$_PAGE_SIZE
        COMMON /GETSYI_ITEMLST/ BUFF_LEN,
     2                          ITEM_CODE,
     2                          BUFF_ADDR,
     2                          LENGTH,
     2                          TERMINATOR
           .
           .
           .
! Get the system page size
        BUFF_LEN = 4
        ITEM_CODE = %LOC(SYI$_PAGE_SIZE)
        BUFF_ADDR = %LOC(PAGE_SIZE)
        LENGTH = 0
        TERMINATOR = 0
        STATUS = SYS$GETSYI(,,,BUFF_LEN,,,)
! Get common event flag cluster and wait
! for GBL1.FOR to set REQUEST_FLAG
        STATUS = SYS$ASCEFC (%VAL(REQUEST_FLAG),
     2                       'CLUSTER',,)
        IF (.NOT. STATUS) CALL LIB$SIGNAL (%VAL(STATUS))
        STATUS = SYS$WAITFR (%VAL(REQUEST_FLAG))
        IF (.NOT. STATUS) CALL LIB$SIGNAL (%VAL(STATUS))
! Get location of data
        PASS_ADDR (1) = %LOC (DEVICE)
        PASS_ADDR (2) = PASS_ADDR(1) + PAGE_SIZE - 1
! Set write flag
        SEC_MASK = SEC$M_WRT
! Map the section
        STATUS = SYS$MGBLSC (PASS_ADDR,      ! Address of section
     2                       RET_ADDR,       ! Address mapped
     2                       ,
     2                       %VAL(SEC_MASK), ! Section mask
     2                       'GLOBAL_SEC',,) ! Section name
        IF (.NOT. STATUS) CALL LIB$SIGNAL (%VAL(STATUS))
! Call GETDVI to get the process ID of the
! process that allocated the device, then
! call GETJPI to get the process name and terminal
! name associated with that process ID.
! Set PROCESS equal to the process name and
! set TERMINAL equal to the terminal name.
           .
           .
           .
! After information is in GLOBAL_SEC
        STATUS = SYS$SETEF (%VAL(INFO_FLAG))
        IF (.NOT. STATUS) CALL LIB$SIGNAL (%VAL(STATUS))
        END
Page-file sections are used to store temporary data in private or global (shared) sections of memory. Images that use 64-bit addressing can map and access an amount of dynamic virtual memory that is larger than the amount of physical memory available on the system.
With this design, if a process requires additional page-file space, page files can be allocated dynamically. Space is no longer reserved in a distinct page file, and pages are not bound to an initially assigned page file. Instead, if modified pages must be written back, they are written to the best available page file.
Each page or swap file can hold approximately 16 million pages (128 GB), and up to 254 page or swap files can be installed. Files larger than 128 GB are installed as multiple files.
Note the following DCL command display changes and system parameter changes as a result of the larger page-file section design:
$ SHOW MEMORY/FILES
              System Memory Resources on 22-MAY-2000 19:04:19.67
Swap File Usage (8KB pages):                               Index     Free      Size
  DISK$ALPHASYS:[SYS48.SYSEXE]SWAPFILE.SYS                     1       904       904
  DISK$SWAP:[SYS48.SYSEXE]SWAPFILE.SYS;1                       2      1048      1048
    Total size of all swap files:                                               1952
Paging File Usage (8KB pages):                             Index     Free      Size
  DISK$PAGE:[SYS48.SYSEXE]PAGEFILE.SYS;1                     253     16888     16888
  DISK$ALPHASYS:[SYS48.SYSEXE]PAGEFILE.SYS                   254     16888     16888
    Total size of all paging files:                                            33776
  Total committed paging file usage:                                            1964
$ SHOW MEMORY/FILES/FULL
              System Memory Resources on 22-MAY-2000 18:47:10.21
Swap File Usage (8KB pages):                               Index     Free      Size
  DISK$ALPHASYS:[SYS48.SYSEXE]SWAPFILE.SYS                     1       904       904
Paging File Usage (8KB pages):                             Index     Free      Size
  DISK$ALPHASYS:[SYS48.SYSEXE]PAGEFILE.SYS                   254     16888     16888
  Total committed paging file usage:                                            1960
This chapter describes the use of system services and run-time routines that VAX systems use to manage memory. It contains the following sections:
Section 13.1 describes the page size on VAX systems.
Section 13.2 describes the layout of virtual address space.
Section 13.3 describes extended addressing enhancements on selected VAX systems.
Section 13.4 describes the three levels of memory allocation routines.
Section 13.5 discusses how to use system services to add virtual address space, adjust working sets, control process swapping, and create and manage sections.
13.1 Virtual Page Size
To facilitate memory protection and mapping, the virtual address space on VAX systems is subdivided into 512-byte units called pages. (On Alpha systems, memory page sizes are much larger and vary from system to system. See Chapter 12 for information about Alpha page sizes.) Versions of system services and run-time library routines that accept page-count values as arguments interpret these arguments in 512-byte quantities. Services and routines automatically round the specified addresses to page boundaries.
13.2 Virtual Address Space
The initial size of a process's virtual address space depends on the size of the image being executed. The virtual address space of an executing program consists of the following three regions:
Program region (P0)
Control region (P1)
System region (S0)
A summary of these regions appears in Figure 13-1.
Figure 13-1 Virtual Address Overview on VAX Systems
The memory management routines map and control the relationship between physical memory and the virtual address space of a process. These activities are, for the most part, transparent to you and your programs. In some cases, however, you can make a program more efficient by explicitly controlling its virtual memory usage.
The maximum size to which a process can increase its address space is controlled by the system parameter VIRTUALPAGECNT.
Using memory management system services, a process can add a specified number of pages to the end of either the program region or the control region. Adding pages to the program region provides the process with additional space for image execution, for example, for the dynamic creation of tables or data areas. Adding pages to the control region increases the size of the user stack. As new pages are referenced, the stack is automatically expanded, as shown in Figure 13-2. (By using the STACK= option in a linker options file, you can also expand the user stack when you link the image.)
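For example, a linker options file such as the following (the file name MYPROG.OPT is hypothetical) reserves a 100-page user stack at link time; the STACK= value is a page count:

! MYPROG.OPT -- illustrative linker options file
STACK = 100

The image is then linked with a command such as $ LINK MYPROG, MYPROG.OPT/OPTIONS.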
Figure 13-2 illustrates the layout of a process's virtual memory. The initial size of a process's virtual address space depends on the size of the image being executed.
Figure 13-2 Layout of VAX Process Virtual Address Space
13.3 Extended Addressing
Selected VAX systems have extended addressing (XA) as part of the memory management subsystem. The extended addressing enhancement is supported on the VAX 6000 Model 600, VAX 7000 Model 600, and VAX 10000 Model 600 systems. Extended addressing comprises the following two major enhancements, which affect system images, system integrated products (SIPs), privileged layered products (LPs), and device drivers:
Extended physical addressing (XPA) increases the size of a physical address from 30 bits to 32 bits, expanding the physical address space from 1 GB to 4 GB. Because 512 MB of that space is still reserved for I/O and adapter space, the capacity for physical memory increases from 512 MB to 3.5 GB, as shown in Figure 13-3.
Figure 13-3 Physical Address Space for VAX Systems with XPA
Extended virtual addressing (XVA) increases the size of the virtual page number field in the format of a system space address from 21 bits to 22 bits. The region of system virtual address space, known as the reserved region or S1 space, is appended to the existing region of system virtual address space known as S0 space, thereby creating a single region of system space. As a result, the system virtual address space increases from 1 GB to 2 GB as shown in Figure 13-4.
Figure 13-4 Virtual Address Space for VAX Systems with XVA
As shown in Figure 13-3, extended addressing increases the maximum
physical address space supported by VAX systems from 1 GB to 4 GB. This
is accomplished by expanding the page frame number (PFN) field in a
page table entry (PTE) from 21 bits to 23 bits, and implementing
changes in the memory management arrays that are indexed by PFN. Both
the process page table entry and system page table entry are changed.
13.4 Levels of Memory Allocation Routines
Sophisticated software systems must often create and manage complex data structures. In these systems, the size and number of elements are not always known in advance. You can tailor the memory allocation for these elements by using dynamic memory allocation. By managing the memory allocation, you can avoid allocating fixed tables that may be too large or too small for your program, and managing memory directly can improve program efficiency. To let you allocate the specific amounts of memory you need, the operating system provides a hierarchy of routines and services for memory management. Memory allocation and deallocation routines allow you to allocate and free storage within the virtual address space available to your process.
There are three levels of memory allocation routines:

Memory management system services:
SYS$EXPREG (Expand Region)
SYS$CRETVA (Create Virtual Address Space)
SYS$DELTVA (Delete Virtual Address Space)
SYS$CRMPSC (Create and Map Section)
SYS$MGBLSC (Map Global Section)
SYS$DGBLSC (Delete Global Section)

Run-time library page management routines:
LIB$GET_VM_PAGE
LIB$FREE_VM_PAGE

Run-time library heap management routines (illustrated in the sketch that follows this list):
LIB$GET_VM
LIB$FREE_VM
LIB$CREATE_VM_ZONE
LIB$CREATE_USER_VM_ZONE
LIB$DELETE_VM_ZONE
LIB$FIND_VM_ZONE
LIB$RESET_VM_ZONE
LIB$SHOW_VM_ZONE
LIB$VERIFY_VM_ZONE
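The following C sketch illustrates the heap-management level; the sizes are arbitrary and the zone is created with its default attributes. It creates a zone, allocates a block from it with LIB$GET_VM, frees the block with LIB$FREE_VM, and then deletes the zone:

#include <stdio.h>
#include <lib$routines.h>   /* lib$create_vm_zone, lib$get_vm, lib$free_vm,
                               lib$delete_vm_zone, lib$signal */

int main(void)
{
    unsigned int status, zone_id, block, nbytes = 128;

    /* Create a zone with the default allocation algorithm and attributes. */
    status = lib$create_vm_zone(&zone_id);
    if ((status & 1) != 1) lib$signal(status);

    /* Allocate a 128-byte block from the zone. */
    status = lib$get_vm(&nbytes, &block, &zone_id);
    if ((status & 1) != 1) lib$signal(status);
    printf("Block of %u bytes allocated at virtual address %u\n",
           nbytes, block);

    /* Free the block at the same level, then delete the zone. */
    status = lib$free_vm(&nbytes, &block, &zone_id);
    if ((status & 1) != 1) lib$signal(status);

    status = lib$delete_vm_zone(&zone_id);
    if ((status & 1) != 1) lib$signal(status);
    return 0;
}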
Modular application programs can call routines at any or all levels of the hierarchy, depending on the kinds of services the application program needs. You must observe the following basic rule when using multiple levels of the hierarchy:
Memory that is allocated by an allocation routine at one level of the hierarchy must be freed by calling a deallocation routine at the same level of the hierarchy. For example, if you allocated a page of memory by calling LIB$GET_VM_PAGE, you can free it only by calling LIB$FREE_VM_PAGE.
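For example, the following C sketch (illustrative only) obtains two pages with LIB$GET_VM_PAGE and returns them with LIB$FREE_VM_PAGE, the deallocation routine at the same level of the hierarchy:

#include <stdio.h>
#include <lib$routines.h>   /* lib$get_vm_page, lib$free_vm_page, lib$signal */

int main(void)
{
    unsigned int status, npages = 2, base = 0;

    /* Allocate two pages of memory. */
    status = lib$get_vm_page(&npages, &base);
    if ((status & 1) != 1) lib$signal(status);
    printf("Pages allocated starting at virtual address %u\n", base);

    /* Free the pages with the routine at the same level of the hierarchy. */
    status = lib$free_vm_page(&npages, &base);
    if ((status & 1) != 1) lib$signal(status);
    return 0;
}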
Figure 13-5 shows the three levels of memory allocation routines.
Figure 13-5 Hierarchy of VAX Memory Management Routines
For information about using memory management RTLs, see Chapter 14.
13.5 Using System Services for Memory Allocation
This section describes how to use system services to perform the following tasks:
The system services allow you to add address space anywhere within the process's program region (P0) or control region (P1). To add address space at the end of P0 or P1, use the Expand Program/Control Region (SYS$EXPREG) system service. SYS$EXPREG optionally returns the range of virtual addresses for the new pages. To add address space in other portions of P0 or P1, use SYS$CRETVA.
The format for SYS$EXPREG is as follows:
SYS$EXPREG (pagcnt ,[retadr] ,[acmode] ,[region])
Specifying the Number of Pages
Use the pagcnt argument to specify the number of pages to add to the end of the region. The range of addresses where the new pages are added is returned in retadr.
Use the acmode argument to specify the access mode to be assigned to the newly created pages.
Use the region argument to specify whether to add the pages to the end of the P0 or P1 region. This argument is optional.
To deallocate pages allocated with SYS$EXPREG, use SYS$DELTVA.
The following example illustrates how to add 4 pages to the program region of a process by writing a call to the SYS$EXPREG system service:
#include <stdio.h>
#include <ssdef.h>

struct {
        unsigned int lower, upper;
} retadr;

main() {
        unsigned int status, pagcnt=4, region=0;

/* Add 4 pages to P0 space */
        status = SYS$EXPREG( pagcnt, &retadr, 0, region);
        if (( status & 1) != 1)
                LIB$SIGNAL( status );
        else
                printf("Starting address: %d Ending address: %d\n",
                        retadr.lower, retadr.upper);
}
The value 0 is passed in the region argument to specify that the pages are to be added to the program region. To add the same number of pages to the control region, you would specify a value of 1 for the region argument.
Note that the region argument to SYS$EXPREG is optional; if it is not specified, the pages are added to or deleted from the program region by default.
The SYS$EXPREG service can add pages only to the end of a particular region. When you need to add pages to the middle of these regions, you can use the Create Virtual Address Space (SYS$CRETVA) system service. Likewise, when you need to delete pages created by either SYS$EXPREG or SYS$CRETVA, you can use the Delete Virtual Address Space (SYS$DELTVA) system service. For example, if you have used the SYS$EXPREG service twice to add pages to the program region and want to delete the first range of pages but not the second, you could use the SYS$DELTVA system service, as shown in the following example:
#include <stdio.h>
#include <ssdef.h>

struct {
        unsigned int lower, upper;
} retadr1, retadr2, retadr3;

main() {
        unsigned int status, pagcnt=4, region=0;

/* Add 4 pages to P0 space */
        status = SYS$EXPREG( pagcnt, &retadr1, 0, region);
        if (( status & 1) != 1)
                LIB$SIGNAL( status );
        else
                printf("Starting address: %d ending address: %d\n",
                        retadr1.lower, retadr1.upper);

/* Add 3 more pages to P0 space */
        pagcnt = 3;
        status = SYS$EXPREG( pagcnt, &retadr2, 0, region);
        if (( status & 1) != 1)
                LIB$SIGNAL( status );
        else
                printf("Starting address: %d ending address: %d\n",
                        retadr2.lower, retadr2.upper);

/* Delete original allocation */
        status = SYS$DELTVA( &retadr1, &retadr3, 0 );
        if (( status & 1) != 1)
                LIB$SIGNAL( status );
        else
                printf("Starting address: %d ending address: %d\n",
                        retadr1.lower, retadr1.upper);
}
In this example, the first call to SYS$EXPREG adds 4 pages to the program region; the virtual addresses of the created pages are returned in the 2-longword array at retadr1. The second call adds 3 pages and returns the addresses at retadr2. The call to SYS$DELTVA deletes the first 4 pages that were added.
Be aware that using SYS$CRETVA presents some risk because it can delete pages that already exist if those pages are not owned by a more privileged access mode. Further, if those pages are deleted, no notification is sent. Therefore, unless you have complete control over an entire system, use SYS$EXPREG or the RTL routines to allocate address space.
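The following C sketch, patterned after the preceding examples, shows the SYS$CRETVA calling sequence. The one-page address range used here is arbitrary and purely illustrative; as noted above, any existing user-mode pages in the requested range are deleted without notification:

#include <stdio.h>
#include <ssdef.h>
#include <starlet.h>         /* sys$cretva */
#include <lib$routines.h>    /* lib$signal */

int main(void)
{
    unsigned int status;
    unsigned int inadr[2], retadr[2];

    /* One 512-byte page at an arbitrary P0 address (illustrative only). */
    inadr[0] = 0x00030000;              /* first byte of desired range */
    inadr[1] = 0x00030000 + 511;        /* last byte of desired range  */

    status = sys$cretva((void *) inadr, (void *) retadr, 0);
    if ((status & 1) != 1)
        lib$signal(status);
    else
        printf("Created pages from %u through %u\n", retadr[0], retadr[1]);
    return 0;
}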