Upgrading Privileged-Code Applications on OpenVMS Alpha and
OpenVMS I64 Systems
Chapter 1
Introduction
This manual is divided into two parts. The first discusses changes to
privileged code on OpenVMS Alpha to support 64-bit addressing and kernel
threads; the second discusses the changes necessary to privileged code and
to the OpenVMS physical infrastructure to support the OpenVMS operating
system on the Intel® Itanium® architecture.
This is not an application porting guide. If you are looking for
information on how to port applications that run on OpenVMS Alpha to OpenVMS
I64, see Porting Applications from HP OpenVMS Alpha to HP OpenVMS Industry
Standard 64 for Integrity Servers.
1.1 Quick Description of OpenVMS Alpha 64-Bit Virtual
Addressing
OpenVMS Alpha Version 7.0 made significant changes to OpenVMS Alpha privileged
interfaces and data structures to support 64-bit virtual addresses and kernel
threads.
For 64-bit virtual addresses, these changes were necessary infrastructure
work to enable processes to grow their virtual address space beyond the existing
1 GB limit of P0 space and the 1 GB limit of P1 space to include P2 space,
for a total of 8 TB. Likewise, S2 space is the extension of system space.
Support for 64-bit virtual addresses makes more of the 64-bit virtual address
space defined by the Alpha architecture available to the OpenVMS Alpha operating
system and to application programs. The 64-bit address features allow processes
to map and access data beyond the previous limits of 32-bit virtual addresses.
Both process-private and system virtual address space now extend to 8 TB.
In addition to the dramatic increase in virtual address space, OpenVMS Alpha
Version 7.0 significantly increases the amount of physical memory that can be used
by individual processes.
Many tools and languages supported by OpenVMS Alpha (including the Debugger,
run-time library routines, and DEC C) are enhanced to support 64-bit virtual
addressing. Input and output operations can be performed directly to and from
the 64-bit addressable space by means of RMS services, the $QIO system service,
and most of the device drivers supplied with OpenVMS Alpha systems.
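For example, the following sketch (not taken from this manual) issues a
$QIOW write whose buffer can reside in P2 space. It assumes a channel already
assigned with SYS$ASSIGN and compilation with 64-bit pointers
(/POINTER_SIZE=64):

    /* Sketch: writing a virtual block from a buffer that may reside in
     * 64-bit (P2) space.  Compiled with /POINTER_SIZE=64, buf64 is
     * passed to $QIOW as a full 64-bit address. */
    #include <starlet.h>
    #include <iodef.h>
    #include <iosbdef.h>

    int write_vblk(unsigned short chan, void *buf64, unsigned int len,
                   unsigned int vbn)
    {
        IOSB iosb;
        int status;

        /* P1 = buffer address, P2 = byte count, P3 = virtual block
         * number; the final completion status lands in the IOSB. */
        status = sys$qiow(0, chan, IO$_WRITEVBLK, &iosb, 0, 0,
                          buf64, len, vbn, 0, 0, 0);
        if (status & 1)
            status = iosb.iosb$w_status;
        return status;
    }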
Underlying this are new system services that allow an application to allocate
and manage the 64-bit virtual address space that is available for process-private
use.
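As a hedged sketch of such an allocation in DEC C (assuming the standard
STARLET headers and 64-bit pointer compilation), a process can expand the
default P2 region with the SYS$EXPREG_64 system service:

    /* Sketch: allocating process-private P2 space with SYS$EXPREG_64.
     * VA$C_P2 names the default 64-bit program region; the service
     * returns the 64-bit starting address and actual length mapped. */
    #include <starlet.h>
    #include <ssdef.h>
    #include <vadef.h>
    #include <gen64def.h>
    #include <stdio.h>

    int main(void)
    {
        GENERIC_64 region;
        void *va = NULL;
        unsigned __int64 len = 0;
        int status;

        region.gen64$q_quadword = VA$C_P2;  /* default P2 region ID */

        /* Ask for 64 KB; acmode 0 and flags 0 take the defaults. */
        status = sys$expreg_64(&region, 65536, 0, 0, &va, &len);
        if (!(status & 1))
            return status;

        printf("mapped %llu bytes of P2 space at %p\n",
               (unsigned long long) len, va);
        return SS$_NORMAL;
    }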
For more information about OpenVMS Alpha 64-bit virtual address features,
see the OpenVMS Alpha Guide to 64-Bit Addressing and VLM Features.[1]
As a result of these changes, some privileged-code applications might need
to make source-code changes to run on OpenVMS Alpha Version 7.0 and later.
This chapter briefly describes OpenVMS Alpha Version 7.0 64-bit virtual address
and kernel threads support and suggests how you should use this guide to ensure
that your privileged-code application runs successfully on OpenVMS Alpha Version
7.0 and later.
Note
[1] This manual has been archived but
is available on the OpenVMS Documentation CD-ROM.
This information has also been included in the OpenVMS Programming
Concepts Manual, Volume I.
1.2 Quick Description of OpenVMS Alpha Kernel Threads
OpenVMS Alpha Version 7.0 provides kernel threads features, which extend process
scheduling capabilities to allow threads of a process to run concurrently on
multiple CPUs in a multiprocessor system. The only interface to kernel threads
is through the DECthreads package. Existing threaded code that uses either
the CMA API or the POSIX threads API should run without change and gain the
advantages provided by kernel threads.
Kernel threads support causes significant changes to the process structure
within OpenVMS (most notably to the process control block (PCB)). Although
kernel threads support does not explicitly change any application programming
interfaces (APIs) within OpenVMS, it does change the use of the PCB in such
a way that some existing privileged code may be impacted.
The kernel threads feature allows a multithreaded process to execute code flows independently
on more than one CPU at a time. This allows a threaded application to make
better use of multiple CPUs in an SMP system. DECthreads uses these independent
execution contexts as virtual CPUs and schedules application threads on them.
OpenVMS then schedules the execution contexts (kernel threads) onto physical
CPUs. By providing a callback mechanism from the OpenVMS scheduler to the DECthreads
thread scheduler, scheduling latencies inherent in user-mode-only thread managers
are greatly reduced. OpenVMS informs DECthreads when a thread has blocked in
the kernel. Using this information, DECthreads can then opt to schedule some
other ready thread.
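As a minimal illustration, the POSIX threads program below contains nothing
OpenVMS-specific; with kernel threads enabled, its workers become kernel
threads that OpenVMS can schedule on different CPUs at the same time:

    /* Sketch: unmodified POSIX threads code of the kind that gains
     * from kernel threads.  Each worker can run concurrently on its
     * own CPU in an SMP system. */
    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 4

    static void *worker(void *arg)
    {
        printf("worker %ld running\n", (long) arg);
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[NTHREADS];
        long i;

        for (i = 0; i < NTHREADS; i++)
            pthread_create(&tid[i], NULL, worker, (void *) i);
        for (i = 0; i < NTHREADS; i++)
            pthread_join(tid[i], NULL);
        return 0;
    }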
For more information about kernel threads, refer to the Bookreader version
of the OpenVMS Programming Concepts Manual and Chapter
6 in this guide.
1.3 Quick Description of OpenVMS Industry Standard 64
OpenVMS I64 Version 8.2 has a 64-bit model and basic system functions that
are similar to OpenVMS Alpha. OpenVMS Alpha and OpenVMS I64 are produced from
a single source code base. OpenVMS I64 has the same look and feel as OpenVMS
Alpha, but runs on HP Integrity servers. Minor changes to the operating system
were made to accommodate the Itanium architecture, but the basic structure and
capabilities of the operating system are the same.
1.4 How to Use This Guide
Read Part I to learn about the changes that might be required for privileged-code
applications to run on OpenVMS Alpha Version 7.0 and how to enhance customer-written
system services and device drivers with OpenVMS Version 7.0 features.
Refer to Part II for information about changes that might be required for
privileged-code applications to run on OpenVMS I64. In most cases, you can
change your code so that it is common code with OpenVMS Alpha.
Refer to the Appendixes for more information about some of the data structures
and routines mentioned throughout this guide.
Part I
Privileged-Code Changes for OpenVMS Alpha
Chapter 2
Upgrading Privileged Software to OpenVMS Alpha Version 7.0
The new features provided in OpenVMS Alpha Version 7.0 have required corresponding
changes in internal system interfaces and data structures. These internal changes
might require changes in some privileged software.
This chapter contains recommendations for upgrading privileged-code applications
to ensure that they run on OpenVMS Alpha Version 7.0. Once your application
is running on OpenVMS Alpha Version 7.0, you can enhance it as described in
Part II.
2.1 Recommendations for Upgrading Privileged-Code Applications
To ensure that a privileged-code application runs on OpenVMS Alpha Version
7.0, do the following:
- Recompile and relink your application to identify almost all of the places
where source changes will be necessary. The remaining changes can be identified
only by inspection.
- If you encounter compile-time or link-time warnings or errors, you must
make the source-code changes required to resolve them.
See Section 2.1.1 for descriptions
of the infrastructure changes that can affect your applications and more
information about how to handle them.
- Refer to Chapter 3 for information
about the data structure fields, routines, macros, and system data cells
made obsolete by OpenVMS Alpha Version 7.0 that might affect privileged-code
applications.
- Once your application recompiles and relinks without errors, you can enhance
it to take advantage of the OpenVMS Alpha Version 7.0 features described
in Part II.
2.1.1 Summary of Infrastructure Changes
This section summarizes OpenVMS Alpha Version 7.0 changes to the kernel that
may require source changes in customer-written drivers and inner-mode software.
The parenthesized recommendation after each change indicates how it can be handled.
- Page tables have moved from the balance set slots to page table space.
(Compile and link the application.)
- The global page table has moved from S0/S1 space to S2 space. (Compile
and link the application.)
- The PFN database has moved from S0/S1 space to S2 space. (Compile
C applications. Inspect MACRO applications for changes that might not cause
warning messages. Link all applications.)
- PFN database entry format has changed. (Compile and link the application.)
- Routines MMG$IOLOCK and MMG$UNLOCK are obsolete and are replaced by
MMG_STD$IOLOCK_BUF and MMG_STD$IOUNLOCK_BUF. (Compile and link the application.)
- A buffer locked for direct I/O is now described by SVAPTE, BOFF, BCNT,
and a DIOBM.
Be aware of code that clears IRP$L_SVAPTE to keep a buffer locked even after
the IRP is reused or deleted. (Inspect the code for changes.)
- A single IRPE can only be used to lock down a single region of pages. (Compile
and link the application.)
- Some assumptions about I/O structure field adjacencies may no longer be
true; for example, IRP$L_QIO_P1 and IRP$L_QIO_P2 are now more than 4 bytes
apart. (Compile, link, and inspect the code; see the sketch following this
list.)
- The IRP$L_AST, IRP$L_ASTPRM, and IRP$L_IOSB cells have been removed. (Compile
and link the application.)
- There are now two types of ACBs; an IRP is always in ACB64 format. (Compile,
link, and inspect the code.)
- MMG$SVAPTECHK can no longer be used for P0/P1 addresses. In addition, P2/S2
addresses are not allowed; only S0/S1 addresses are supported. (Inspect the code.)
- There are now two types of buffer objects; buffer objects can be mapped into
S2 space. (Inspect the code.)
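The sketch below illustrates the field-adjacency pitfall from the list
above. The structure here is a hypothetical stand-in for the real IRP
definition in the driver headers; only the two named cells, and the fact
that they are no longer adjacent, matter:

    /* Hypothetical stand-in for the IRP; real offsets come from the
     * driver headers.  The point: IRP$L_QIO_P1 and IRP$L_QIO_P2 are
     * no longer 4 bytes apart. */
    #include <string.h>

    typedef struct {
        /* ... many other cells ... */
        int irp$l_qio_p1;
        int filler[3];          /* the cells are no longer adjacent */
        int irp$l_qio_p2;
        /* ... */
    } FAKE_IRP;

    /* WRONG: assumes P1 and P2 occupy 8 contiguous bytes. */
    void get_params_wrong(FAKE_IRP *irp, int p[2])
    {
        memcpy(p, &irp->irp$l_qio_p1, 8);  /* picks up filler, not P2 */
    }

    /* RIGHT: reference each cell by name so the compiler applies the
     * true offsets, whatever they are in the current release. */
    void get_params_right(FAKE_IRP *irp, int p[2])
    {
        p[0] = irp->irp$l_qio_p1;
        p[1] = irp->irp$l_qio_p2;
    }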
The remaining sections in this chapter contain more details about these changes.
Important
All device drivers, VCI clients, and inner-mode components
must be recompiled and relinked to run on OpenVMS Alpha Version 7.0.