HP OpenVMS Systems Documentation


OpenVMS Alpha Guide to Upgrading Privileged-Code Applications



Part II
Enhancing Privileged-Code Applications


Chapter 4
Modifying Device Drivers to Support 64-Bit Addressing

This chapter describes how to modify customer-written device drivers to support 64-bit addresses.

For more information about the data structures and routines described in this chapter, see Appendix A and Appendix B.

4.1 Recommendations for Modifying Device Drivers

Before you modify a device driver to support 64-bit addresses, the driver must compile and relink without errors on OpenVMS Alpha Version 7.0 (see Chapter 2). If you use OpenVMS-supplied FDT routines, 64-bit support can be automatic or easily obtained. Device drivers written in C are usually easier to modify than drivers written in MACRO-32, and drivers using direct I/O are usually easier to modify than those using buffered I/O.

When your device driver runs successfully as a 32-bit addressable driver on OpenVMS Alpha Version 7.0, you can modify it to support 64-bit addresses as follows:

  • Select the I/O functions that are to support 64-bit addresses.
  • Follow the IRP$L_QIO_P1 value through the driver and promote all references to it to 64-bit addresses.
  • Declare 64-bit support for each selected I/O function.

The remaining sections in this chapter provide more information about these recommendations.

4.2 Mixed Pointer Environment in C

OpenVMS Alpha 64-bit addressing support for mixed pointers includes the following features:

  • OpenVMS Alpha 64-bit virtual address space layout that applies to all processes. (There are no special 64-bit processes or 32-bit processes.)
  • 64-bit pointer support for addressing the entire 64-bit OpenVMS Alpha address space layout including P0, P1, and P2 address spaces and S0/S1, S2, and page table address spaces.
  • 32-bit pointer compatibility for addressing P0, P1, and S0/S1 address spaces.
  • Many new 64-bit system services which support P0, P1, and P2 space addresses.
  • Many existing system services enhanced to support 64-bit addressing.
  • 32-bit sign-extension checking for all arguments passed to system services that accept only 32-bit pointers.
  • C and MACRO-32 macros for handling 64-bit addresses.

To support 64-bit addresses in device drivers, you must use the new version (V5.2) of the DEC C compiler as follows:

  • Compile your device driver using /POINTER_SIZE=32


    $ CC/STANDARD=RELAXED_ANSI89 -
        /INSTRUCTION=NOFLOATING_POINT -
        /EXTERN=STRICT -
        /POINTER_SIZE=32 -
        LRDRIVER+SYS$LIBRARY:SYS$LIB_C.TLB/LIBRARY
    
  • Use #pragma __required_pointer_size 32|64 to change the pointer size in effect within a compilation unit.
  • 64-bit pointer types are defined by header files; for example:


    #include <far_pointers.h>
    VOID_PQ  user_va;  /* 64-bit "void *" */
    ...
    #include <ptedef.h>
    PTE *    svapte;   /* 32-bit pointer to a PTE */
    PTE_PQ   va_pte;   /* Quadword pointer to a PTE */
    PTE_PPQ  vapte_p;  /* Quadword pointer to a
                        * quadword pointer to a PTE */
    
  • The compiler issues a warning when an assignment may truncate a pointer; for example:


    p0_va = p2_va;
    ^
    %CC-W-MAYLOSEDATA, In this statement, "p2_va" has
       a larger data size than "short pointer to char"
    
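The 32-bit pointer compatibility described above rests on a simple rule: a 64-bit virtual address is reachable through a 32-bit pointer only if sign-extending its low 32 bits reproduces the full address. The following C sketch illustrates that check; it is an illustrative helper, not an OpenVMS routine.

```c
#include <stdint.h>

/* A 64-bit virtual address is usable through a 32-bit pointer only if
 * sign-extending its low 32 bits reproduces the full address, that is,
 * the address lies in P0/P1 (low 2 GB) or S0/S1 (high 2 GB) space. */
static int is_32bit_addressable(uint64_t va)
{
    return (uint64_t)(int64_t)(int32_t)(uint32_t)va == va;
}
```

This is the same test the system services apply when performing 32-bit sign-extension checking on arguments passed to 32-bit-pointer-only services.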

4.3 $QIO Support for 64-Bit Addresses

The $QIO and $QIOW system services accept the following arguments:


$QIO[W] efn,chan,func,iosb,astadr,astprm,p1,p2,p3,p4,p5,p6

These services have a 64-bit friendly interface (as described in the OpenVMS Alpha Guide to 64-Bit Addressing and VLM Features1), which allows them to support 64-bit addresses.

Table 4-1 summarizes the changes to the data types of the $QIO and $QIOW system service arguments to accommodate 64-bit addresses.

Table 4-1 $QIO[W] Argument Changes

Argument  Prior Type             New Type        Description
efn       Unsigned longword      --              Event flag number. Unchanged.
chan      Unsigned word          --              Channel number. Unchanged.
func      Unsigned longword      --              I/O function code. Unchanged.
iosb      32-bit pointer (1)     64-bit pointer  Pointer to a quadword I/O status block (IOSB). The IOSB format is unchanged.
astadr    32-bit pointer (1)     64-bit pointer  Procedure value of the caller's AST routine. On Alpha systems, the procedure value is a pointer to the procedure descriptor.
astprm    Unsigned longword (2)  Quadword        Argument value for the AST routine.
P1        Longword (2)           Quadword        Device-dependent argument. Often P1 is a buffer address.
P2        Longword (2)           Quadword        Device-dependent argument. Only the low-order 32 bits are used by system-supplied FDT routines that use P2 as the buffer size.
P3        Longword (2)           Quadword        Device-dependent argument.
P4        Longword (2)           Quadword        Device-dependent argument.
P5        Longword (2)           Quadword        Device-dependent argument.
P6        Longword (2)           Quadword        Device-dependent argument. Sometimes P6 contains the address of a diagnostic buffer.

(1) The 32-bit pointer was sign-extended to 64 bits as required by the OpenVMS Calling Standard.
(2) The 32-bit longword value was sign-extended to 64 bits as required by the OpenVMS Calling Standard.

Usually the $QIO P1 argument specifies a buffer address. All the system-supplied upper-level FDT routines that support the read and write functions use this convention. The P1 argument determines whether the caller of the $QIO service requires 64-bit support. If the $QIO system service rejects a 64-bit I/O request, the following fatal system error status is returned:


SS$_NOT64DEVFUNC  64-bit address not supported by device for this function

This fatal condition value is returned under the following circumstances:

  • The caller has specified a 64-bit virtual address in the P1 device dependent argument, but the device driver does not support 64-bit addresses with the requested I/O function.
  • The caller has specified a 64-bit address for a diagnostic buffer, but the device driver does not support 64-bit addresses for diagnostic buffers.
  • Some device drivers may also return this condition value when 64-bit buffer addresses are passed using the P2 through P6 arguments and the driver does not support a 64-bit address with the requested I/O function.
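The decision that leads to SS$_NOT64DEVFUNC can be sketched in C. This is an illustrative mock, not OpenVMS source: the status codes are stand-ins, and `fdt_supports_64` stands for the per-function 64-bit support mask that the driver declares in its FDT.

```c
#include <stdint.h>

#define SS_NORMAL        1   /* stand-in for SS$_NORMAL        */
#define SS_NOT64DEVFUNC  2   /* stand-in for SS$_NOT64DEVFUNC  */

/* Does this 64-bit value sign-extend from its low 32 bits? */
static int sign_extends_from_32(uint64_t p1)
{
    return (uint64_t)(int64_t)(int32_t)(uint32_t)p1 == p1;
}

/* Sketch of the service-level decision: a P1 buffer address that does
 * not fit in sign-extended 32 bits is a 64-bit request, which is fatal
 * unless the FDT entry declares 64-bit support for the function. */
static int check_p1_address(uint64_t p1, int fdt_supports_64)
{
    if (!sign_extends_from_32(p1) && !fdt_supports_64)
        return SS_NOT64DEVFUNC;
    return SS_NORMAL;
}
```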

For more information about the $QIO, $QIOW, and $SYNCH system services, see the OpenVMS System Services Reference Manual: GETUTC--Z.

Note

1 This manual has been archived but is available on the OpenVMS Documentation CD-ROM. This information has also been included in the OpenVMS Programming Concepts Manual, Volume I.

4.4 Declaring Support for 64-Bit Addresses in Drivers

Device drivers declare 64-bit address support individually for each I/O function. The details vary with the language used to code the initialization of the driver's Function Decision Table (FDT).

4.4.1 Drivers Written in C

Drivers written in C use the ini_fdt_act macro to initialize an FDT entry for an I/O function. This macro uses the DRIVER$INI_FDT_ACT routine. Both the macro and the routine have been enhanced for OpenVMS Alpha Version 7.0.

The format of the macro in releases prior to OpenVMS Alpha Version 7.0 was:


ini_fdt_act (fdt, func, action, bufflag)

where the bufflag parameter must be one of the following:

  BUFFERED The specified function is buffered.
  NOT_BUFFERED The specified function is direct. This is a synonym for DIRECT.
  DIRECT The specified function is direct. This is a synonym for NOT_BUFFERED.

The use of the bufflag parameter has been enhanced to include the declaration of 64-bit support by allowing three additional values:

  BUFFERED_64 The specified function is buffered and supports a 64-bit address in the p1 parameter.
  NOT_BUFFERED_64 The specified function is direct and supports a 64-bit address in the p1 parameter.
  DIRECT_64 The specified function is direct and supports a 64-bit address in the p1 parameter.

If a driver does not support a 64-bit address on any of its functions, there is no need to change its use of the ini_fdt_act macro.

For example, the following C code segment declares that the IO$_READVBLK and IO$_READLBLK functions support 64-bit addresses.


ini_fdt_act (&driver$fdt, IO$_SENSEMODE, my_sensemode_fdt, BUFFERED);
ini_fdt_act (&driver$fdt, IO$_SETMODE,   my_setmode_fdt,   BUFFERED);
ini_fdt_act (&driver$fdt, IO$_READVBLK,  acp_std$readblk,  DIRECT_64);
ini_fdt_act (&driver$fdt, IO$_READLBLK,  acp_std$readblk,  DIRECT_64);

The interpretation of the bufflag parameter to the DRIVER$INI_FDT_ACT routine has been enhanced to support the new values and the setting of the 64-bit support mask in the FDT data structure.
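The FDT data structure and the internals of DRIVER$INI_FDT_ACT are OpenVMS-internal. The following C sketch, with invented structure and routine names, illustrates the kind of bookkeeping the enhanced routine performs: each new *_64 bufflag value sets the function's bit in a 64-bit support mask in addition to the usual buffered/direct classification.

```c
#include <stdint.h>

/* Invented stand-ins for the real FDT structure and bufflag values;
 * the actual layout is internal to OpenVMS. */
enum bufflag { BUFFERED, DIRECT, BUFFERED_64, NOT_BUFFERED_64, DIRECT_64 };

struct fdt {
    uint64_t buffered_mask;   /* bit n set: function code n is buffered      */
    uint64_t support64_mask;  /* bit n set: function code n accepts 64-bit P1 */
};

/* Sketch of what an enhanced DRIVER$INI_FDT_ACT-style routine might do
 * with the new bufflag values. */
static void fdt_set_entry(struct fdt *fdt, unsigned func, enum bufflag flag)
{
    if (flag == BUFFERED || flag == BUFFERED_64)
        fdt->buffered_mask |= 1ULL << func;
    if (flag == BUFFERED_64 || flag == NOT_BUFFERED_64 || flag == DIRECT_64)
        fdt->support64_mask |= 1ULL << func;
}
```

A driver that never passes a *_64 value leaves the 64-bit mask zero, which is why drivers without 64-bit support need no source changes.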

4.4.2 Drivers Written in MACRO-32

As of OpenVMS Alpha Version 7.0, drivers written in MACRO-32 use a new FDT_64 macro to declare the set of I/O functions for which the driver supports 64-bit addresses. The use of the FDT_64 macro is similar to the use of the existing FDT_BUF macro. If a driver does not support a 64-bit address on any of its functions, there is no need to use the new FDT_64 macro.

For example, the following MACRO-32 code segment declares that the IO$_READVBLK and IO$_READLBLK functions support 64-bit addresses.


FDT_INI   MY_FDT
FDT_BUF   <SENSEMODE,SETMODE>
FDT_64    <READVBLK,READLBLK>
FDT_ACT   ACP_STD$READBLK, <READVBLK,READLBLK>

4.4.3 Drivers Written in BLISS

As of OpenVMS Alpha Version 7.0, drivers written in BLISS-32 and BLISS-64 use a new optional keyword parameter, FDT_64, to the existing FDTAB macro to declare the set of I/O functions that support 64-bit addresses. The use of the new FDT_64 parameter is similar to the use of the existing FDT_BUF parameter. If a driver does not support a 64-bit address on any of its functions, there is no need to use the new FDT_64 parameter.

For example, the following BLISS code segment declares that the IO$_READVBLK and IO$_READLBLK functions support 64-bit addresses.


FDTAB (
    FDT_NAME = MY_FDT,
    FDT_BUF  = (SENSEMODE,SETMODE),
    FDT_64   = (READVBLK,READLBLK),
    FDT_ACT  = (ACP_STD$READBLK, (READVBLK,READLBLK) )
    );

4.5 I/O Mechanisms

Table 4-2 summarizes the I/O mechanisms that support 64-bit addresses.

Table 4-2 Summary of 64-Bit Support by I/O Mechanism
Mechanism                     64-Bits  Comments
Simple buffered I/O           Yes      32/64-bit BUFIO packet headers
Complex buffered I/O          No       Used by XQP and ACPs
Complex chained buffered I/O  Yes      New cells in CXB
Direct I/O                    Yes      Cross-process PTE problem
LAN VCI                       Yes      Cross-process PTE problem
VMS I/O cache                 Yes      64-bit support is transparent to other components
Buffer objects                Yes      Special case of direct I/O
Diagnostic buffers            Yes      Driver-wide attribute

4.5.1 Simple Buffered I/O

Figure 4-1 shows a 32-bit buffered I/O packet header.

Figure 4-1 32-Bit Buffered I/O Packet Header


BUFIO$PS_PKTDATA Contains pointer to buffered data in packet
   
BUFIO$PS_UVA32 Contains 32-bit user virtual address

  • No symbolic offsets currently defined.
  • Frequent use of manifest constants; for example:


         MOVAB   12(R2),(R2)
    

  • Dependencies on the packet header layout can be anywhere in the driver code path.
  • Drivers allocate and initialize these packets.

A 64-bit buffered packet header is as shown in Figure 4-2.

Figure 4-2 New 64-Bit Buffered I/O Packet Header


BUFIO$PS_PKTDATA Contains pointer to buffered data in packet
   
BUFIO$PS_UVA32 Must contain BUFIO$K_64 (-1) value
   
BUFIO$PQ_UVA64 Contains 64-bit user virtual address

  • BUFIO structures and offsets now defined.
  • Both 32-bit and 64-bit formats supported.
  • BUFIO packets are self-identifying.
  • New routines EXE_STD$ALLOC_BUFIO_64 and EXE_STD$ALLOC_BUFIO_32 are provided.
  • Used for diagnostic buffers as well.
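The self-identifying property described above can be sketched in C. The structure below is an illustrative mock mirroring the fields named in the figures (BUFIO$PS_PKTDATA, BUFIO$PS_UVA32, BUFIO$PQ_UVA64); the real layout comes from OpenVMS header files.

```c
#include <stdint.h>

#define BUFIO_K_64  ((uint32_t)-1)  /* stand-in for the BUFIO$K_64 sentinel */

/* Illustrative layout; the real structure is defined by OpenVMS headers. */
struct bufio {
    void    *pktdata;  /* pointer to buffered data in the packet     */
    uint32_t uva32;    /* 32-bit user VA, or BUFIO_K_64 sentinel     */
    uint64_t uva64;    /* 64-bit user VA, valid when sentinel is set */
};

/* Packets are self-identifying: a UVA32 of -1 marks the 64-bit form,
 * otherwise the 32-bit user VA is sign-extended per the calling standard. */
static uint64_t bufio_user_va(const struct bufio *p)
{
    if (p->uva32 == BUFIO_K_64)
        return p->uva64;
    return (uint64_t)(int64_t)(int32_t)p->uva32;
}
```

Code that consumes a BUFIO packet can therefore handle both formats through one accessor, which is what lets 32-bit and 64-bit packets coexist in one driver.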

4.5.2 Direct I/O

  • The caller's virtual address for the buffer is used only in FDT context.
  • Most of the driver identifies buffer start by IRP$L_SVAPTE and IRP$L_BOFF.
  • Driver "layering" in start I/O or fork environments.
  • Most drivers use either OpenVMS-supplied upper-level FDT routines or FDT support routines.
  • The move of the process page tables into page table space has a significant impact:
    1. Only the current process's page table entries are available at any given time.
      This is called the "cross-process PTE access" problem.
    2. A 64-bit address is required to access page table entries in page table space: process, global, and system PTEs.
    3. Because "SVAPTE, BOFF, BCNT" are used in many device drivers, the impact of items 1 and 2 is significant.

4.5.3 Direct I/O Buffer Map (DIOBM)

Figure 4-3 shows the DIOBM data structure.

Figure 4-3 Direct I/O Buffer Map Data Structure


  • Use the PTE vector in the DIOBM for buffers up to 64 KB
  • Use a "secondary" DIOBM for buffers up to 5.2 MB
  • Use the PTE window method with the DIOBM for larger buffers
  • The DIOBM is embedded in the IRP, IRPE, VCRP, and DCBE structures
  • MMG_STD$IOLOCK_BUF replaces MMG_STD$IOLOCK
  • New DIOBM routines; for example, IOC_STD$FILL_DIOBM
  • Also of interest to LAN VCI clients
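The three-tier strategy above can be sketched as a size-based selection. The thresholds come from the bullets (64 KB for the embedded PTE vector, about 5.2 MB for a secondary DIOBM); the exact cutoffs depend on DIOBM sizing, so treat these constants as illustrative.

```c
#include <stdint.h>

/* Illustrative method selector; names and exact thresholds are
 * assumptions, not OpenVMS definitions. */
enum diobm_method { DIOBM_VECTOR, DIOBM_SECONDARY, DIOBM_WINDOW };

static enum diobm_method diobm_choose(uint64_t buffer_bytes)
{
    if (buffer_bytes <= 64ULL * 1024)        /* embedded PTE vector     */
        return DIOBM_VECTOR;
    if (buffer_bytes <= 5200ULL * 1024)      /* separately allocated    */
        return DIOBM_SECONDARY;              /* "secondary" DIOBM       */
    return DIOBM_WINDOW;                     /* PTE window method       */
}
```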

4.5.4 64-Bit AST

Figure 4-4 shows a 64-Bit AST.

Figure 4-4 64-Bit AST


ACB$B_RMOD New ACB$V_FLAGS_VALID bit (last spare bit)
   
ACB$L_FLAGS Contains ACB$V_64BITS bit (was filler space)
   
ACB$L_ACB64X Byte offset to ACB64X structure

  • Both ACB and ACB64X formats are supported.
  • ACB packets are self-identifying.
  • An ACB64 is an ACB with an immediate ACB64X.
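The two-level flag check that makes ACB packets self-identifying can be sketched in C. The structure and bit values below are invented stand-ins for the fields named in Figure 4-4 (ACB$B_RMOD, ACB$V_FLAGS_VALID, ACB$L_FLAGS, ACB$V_64BITS, ACB$L_ACB64X); actual definitions come from OpenVMS headers.

```c
#include <stddef.h>
#include <stdint.h>

/* Invented bit positions; the real values are OpenVMS-defined. */
#define RMOD_FLAGS_VALID  0x80u   /* "last spare bit" in the RMOD byte */
#define FLAGS_64BITS      0x01u   /* 64-bit bit within the flags field */

struct acb {
    uint8_t  rmod;     /* request mode byte, carries FLAGS_VALID bit */
    uint32_t flags;    /* meaningful only when FLAGS_VALID is set    */
    uint32_t acb64x;   /* byte offset from ACB base to the ACB64X    */
};

/* An ACB is the 64-bit form only when both flag bits check out;
 * old-format packets never have FLAGS_VALID set, so they are safe. */
static int acb_is_64(const struct acb *a)
{
    return (a->rmod & RMOD_FLAGS_VALID) && (a->flags & FLAGS_64BITS);
}

/* Locate the ACB64X extension via the self-described byte offset. */
static const void *acb_find_acb64x(const struct acb *a)
{
    return acb_is_64(a) ? (const void *)((const char *)a + a->acb64x)
                        : NULL;
}
```

An ACB64 is simply an ACB whose ACB64X immediately follows it, so in that case the offset is a small constant (hex 28 in the IRP case shown in Section 4.5.5).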

4.5.5 64-Bit ACB Within the IRP

An embedded ACB64 at the start of the IRP is shown in Figure 4-5.

Figure 4-5 Embedded ACB64


An IRP created by the $QIO system service uses the ACB64 layout unconditionally.

IRP$B_RMOD New ACB$V_FLAGS_VALID bit always set
   
IRP$L_ACB_FLAGS New ACB$V_64BITS bit always set
   
IRP$L_ACB64X_OFFSET Contains hex 28

4.5.6 I/O Function Definitions

I/O functions are defined as follows:

  1. Direct I/O, raw data transfer
    Functions in this category are implemented as direct I/O operations and transfer raw data from the caller's buffer to the device without significant alteration or interpretation of the data.
  2. Direct I/O, control data transfer
    Functions in this category are implemented as direct I/O operations and transfer control information from the caller's buffer to the device driver. The device driver usually interprets the data or uses it to control the operation of the device.
    Usually these functions do not support 64-bit addresses. In contrast to raw data transfers, control data transfers tend to be smaller and are invoked less frequently, so there is less need to store such data in a 64-bit addressable region. The only area of the driver impacted is the corresponding FDT routines. However, control data often introduces the problem of embedded 32-bit pointers.
  3. Buffered I/O, raw data transfer
    Functions in this category are implemented as buffered I/O operations by the device driver but are otherwise the same type of raw data transfer from the caller's buffer as the first category.
  4. Buffered I/O, control data transfer
    Functions in this category are implemented as buffered I/O operations by the device driver but are otherwise the same type of control data transfer from the caller's buffer as the second category.
  5. Control operation, no data transfer, with parameters
    Functions in this category control the device and do not transfer any data between a caller's buffer and the device. Because there is no caller's buffer, it does not matter whether the function is designated as a buffered or direct I/O function. The control operation has parameters that are specified in p1 through p6; however, these parameters do not point to a buffer.
  6. Control operation, no data transfer, with no parameters
    Functions in this category control the device and do not transfer any data between a caller's buffer and the device. Because there is no caller's buffer, it does not matter whether the function is designated as a buffered or direct I/O function. In addition, there are no parameters for these functions.
Table 4-3 summarizes the I/O functions described in this section.

Table 4-3 Guidelines for 64-Bit Support by I/O Function
Function Type                             64-Bits  Area of Impact
Direct I/O, raw data transfer             Yes      FDT only
Direct I/O, control data transfer         No       FDT only
Buffered I/O, raw data transfer           No/Yes   Entire driver, new BUFIO packet
Buffered I/O, control data transfer       No       Entire driver, new BUFIO packet
Control, no data transfer, parameters     No       Entire path, but usually simple
Control, no data transfer, no parameters  Moot     None
