HP OpenVMS Version 8.4 New Features and Documentation Overview

Chapter 3 Virtualization Features
This chapter describes the virtualization features of the OpenVMS 
operating system.
3.1 OpenVMS as a Guest Operating System on Integrity VM
 
OpenVMS for Integrity servers Version 8.4 is supported as a guest 
operating system on HP Integrity Virtual Machines (Integrity VM). 
Integrity VM is a soft partitioning and virtualization technology 
within the HP Virtual Server Environment that enables you to create 
multiple virtual servers or machines, with shared resources, within a 
single HP Integrity server or nPartition.
 
Each virtual machine hosts its own "Guest" operating system instance, 
applications, and users. Integrity VM runs on any Intel VT-i enabled 
HP Integrity server, including blades. On HP Integrity servers, the 
Integrity VM Host runs under HP-UX, while OpenVMS can run as a guest.
3.1.1 Licensing Requirements
For information about licensing OpenVMS as a Guest operating system on 
Integrity VM, see the HP OpenVMS License Management Utility Manual.
3.1.2 Supported Hardware
 
The OpenVMS Guest operating system on Integrity VM is supported on 
Intel Itanium processors that are enabled with VT-i (Intel 
Virtualization Technology for the Intel Itanium architecture). 
Currently, the Intel Itanium 9000 and 9100 series processors support 
VT-i.
 
For more information on Integrity VM, see:  
http://h71028.www7.hp.com/enterprise/us/en/os/hpux11i-partitioning-integrity-vm.html
3.1.3 Installing OpenVMS as a Guest Operating System
 
To install OpenVMS as a guest operating system, see Chapter 3 of the 
HP OpenVMS Version 8.4 for Integrity Servers Upgrade and Installation 
Manual.
3.1.4 OpenVMS Features as a Guest Operating System
OpenVMS as a guest operating system supports the following features:
 
  - The OpenVMS guest operating system is SMP enabled and supports up 
  to 64 GB of physical memory.
  
  - OpenVMS guests support the virtualized disk drives and network 
  interfaces provided by Integrity VM. Integrity VM presents disks and 
  logical volumes as SCSI disks (DK devices on OpenVMS guests) and 
  virtual network interfaces as Intel Gigabit cards (EI devices on 
  OpenVMS guests), regardless of the physical network card or mass 
  storage connection on the Host system (see the example following 
  this list).
  
  - Limited support for online migration; only stand-alone guest 
  configurations are supported.
  
  - Support for Accelerated Virtual I/O (AVIO) LAN and SCSI drivers.
  
  - Support for management and monitoring of the OpenVMS guest 
  operating system using the VSE suite of products.
  
  - OpenVMS guest systems are cluster enabled and support clustering 
  over LAN and Cluster over IP.
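
For example, the virtualized devices can be examined from the guest 
with standard DCL commands. The following is a minimal sketch; the 
devices listed depend on the virtual hardware configured for the 
guest: 

$ ! List the virtual SCSI disks (DK devices) seen by the guest 
$ SHOW DEVICE DK 
$ ! List the virtual network interfaces (EI devices) seen by the guest 
$ SHOW DEVICE EI 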
  
For more information, see the HP OpenVMS Version 8.4 Release 
Notes.
3.2 ID-VSE for OpenVMS
 
The HP Insight Dynamics - Virtual Server Environment (ID-VSE) is an 
integrated suite of multi-platform products that helps you 
continuously analyze and optimize physical and virtual server 
resources. It helps you reduce the costs associated with capacity and 
energy planning, provisioning, upgrades, and changes in your data 
center.
 
ID-VSE integrates with HP Systems Insight Manager (HP SIM) running on a 
central management station (CMS), and manages one or more managed nodes 
in your network.
 
The following ID-VSE products are supported on OpenVMS Version 8.4:
 
HP Virtualization Manager
 
 
The Virtualization Manager software provides a framework for 
visualizing your virtual server environment (VSE) at different levels 
of detail. All the systems and workloads are displayed in a graphical 
view. The hierarchical relationships between systems and their current 
utilization are displayed on a single screen. It also allows you to 
access additional VSE Management software for management and 
configuration of systems and workloads. Virtualization Manager collects 
utilization data for processor, memory, network, and disk from OpenVMS 
managed nodes through the Utilization WBEM provider.
 
HP Capacity Advisor
 
 
The Capacity Advisor software provides capacity analysis and planning 
to help optimize the workloads across VSE for the highest utilization 
of server resources. It also provides scenario analysis to optimize the 
current server resources and plan for future workload expansion and 
server consolidation. Capacity Advisor collects utilization data for 
processor, memory, network, and disk from the OpenVMS managed nodes 
through the Utilization WBEM provider.
 
HP Global Workload Manager
 
 
HP Global Workload Manager (gWLM) is a multi-system, multi-OS workload 
manager that serves as an intelligent policy engine in the VSE 
software. It simplifies the deployment of automated workload management 
policies across multiple servers and provides centralized monitoring 
and improved server utilization to meet the service-level objectives.
 
On OpenVMS with Global Workload Manager, the capabilities of iCAP and 
TiCAP can be automated based on defined business policies. For 
example, if performance goals are not met, additional processors can 
be automatically turned on using TiCAP, or usage rights can be 
dynamically moved from one partition to another.
 
Prerequisite
 
 
To use the Global Workload Manager, the gWLM agent must be running on 
the OpenVMS managed nodes.
 
  
Chapter 4 Performance Enhancements
This chapter describes new features relating to performance 
enhancements in this version of the HP OpenVMS operating system.
4.1 RAD Support (Integrity servers Only)
 
OpenVMS Version 8.4 has been enhanced to support resource affinity 
domains (RADs) on cell-based Integrity server systems. This feature 
enables OpenVMS to take advantage of cell-based systems configured 
with cell local memory (CLM). On systems configured with both CLM and 
interleaved memory (ILM), OpenVMS allocates process memory from the 
CLM within a cell and schedules the process to run on a CPU within the 
same cell. Overall memory latency and bandwidth for the process are 
improved by reducing how often a CPU in one cell references memory in 
another cell.
 
Prerequisite
 
 
To use RAD support, CLM must be configured on the system using the 
Partition Manager software. Partition Manager provides system 
administrators with a graphical user interface (GUI) to configure and 
manage nPartitions on HP server systems. Partition Manager is 
supported on HP-UX, Microsoft Windows, Red Hat Enterprise Linux, and 
SUSE Linux Enterprise Server. The software interacts with the user 
through a web browser running on a client system. The client system 
can be the same as the server system, or it can be a separate 
workstation or PC. Note that Partition Manager does not run on 
OpenVMS. For more information and for software downloads, see:
 
 
http://docs.hp.com/en/PARMGR2/
 
Recommendation
 
 
HP recommends configuring systems with a combination of both CLM and 
ILM. Initially, configure 50% of the memory in each cell as CLM. For 
best performance, follow the hardware guidelines for configuring 
systems with combinations of CLM and ILM. For cell-based systems, the 
number of cells and the amount of ILM must each be a power of 2.
 
By default, on a system configured with CLM, OpenVMS boots with RAD 
support turned on. RAD support can be turned off by setting the 
RAD_SUPPORT system parameter to 0. The recommended method of turning 
RAD support off is to configure all the memory on the system as ILM.
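
For example, the current setting can be checked with SYSGEN, and a 
persistent change can be made through MODPARAMS.DAT and AUTOGEN. This 
is a minimal sketch, assuming suitable privileges; a reboot is 
required for the new value to take effect: 

$ ! Display the current RAD_SUPPORT setting 
$ RUN SYS$SYSTEM:SYSGEN 
SYSGEN> SHOW RAD_SUPPORT 
SYSGEN> EXIT 
$ ! To turn RAD support off across reboots, add the line 
$ !     RAD_SUPPORT = 0 
$ ! to SYS$SYSTEM:MODPARAMS.DAT, then run AUTOGEN and reboot: 
$ @SYS$UPDATE:AUTOGEN GETDATA SETPARAMS 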
 
When ILM and CLM are both present on an Integrity server system, the 
ILM is seen as an additional RAD. Because all CPUs on the system have a 
similar average memory latency when accessing this memory, all CPUs are 
associated with this RAD. Note that there is no Alpha hardware that has 
both RADs and ILM, and thus this extra RAD never appears on Alpha.
 
For example, consider an rx7640 system with 2 cells, 16 GB of memory 
per cell, and 8 cores per cell. If you configure the system with 50% 
CLM per cell, OpenVMS configures the system with 3 RADs at boot. 
RAD 0 contains the CLM and cores from the first cell. RAD 1 contains 
the CLM and cores from the second cell. A third RAD (RAD 2) contains 
the ILM and all cores.
 
Although there are 3 RADs, processes are assigned only to the first 2 
RADs as home RADs. The RAD in which a core first appears is the RAD 
with the best memory access for that core. The $GETSYI system service 
and the F$GETSYI lexical function report this information through the 
RAD_CPUS item code.
 
During system boot, the operating system assigns a base RAD from which 
shared and operating system data is allocated. The base RAD that the 
operating system assigns is the RAD with ILM because all CPUs have 
similar access to this memory. Non-paged pool is allocated from the 
base RAD. By default, per-RAD non-paged pool is turned off.
 
By default, global page faults are now satisfied with pages from the 
base RAD rather than from the RAD of the CPU on which the fault 
occurred. Because global sections can be accessed by many processes 
running on all RADs, pages are allocated from the RAD with ILM.
 
For more information about RAD support, see the HP OpenVMS Alpha Partitioning and Galaxy Guide.
4.1.1 Page Zeroing for RAD-based Systems (Integrity servers and Alpha)
 
Within the idle loop, CPUs can "zero" deleted pages of memory to 
satisfy future demand-zero page faults. The ZERO_LIST_HI system 
parameter specifies the maximum number of zeroed pages that the 
operating system keeps. For systems with multiple RADs, ZERO_LIST_HI 
specifies the maximum number of zeroed pages per RAD.
4.1.2 SYS$EXAMPLES:RAD.COM (Integrity servers and Alpha)
 
The SYS$EXAMPLES:RAD.COM command procedure provides an example for 
using the RAD related F$GETSYI item codes, RAD_CPUS and RAD_MEMORY. 
This procedure has been updated to produce a more concise view of the 
RAD configuration for Integrity servers and Alpha systems.
 
$ @SYS$EXAMPLES:RAD 
 
Node: SYS123 Version: V8.4  System: HP rx7640  (1.60GHz/12.0MB) 
 
RAD   Memory (GB)   CPUs 
===   ===========   =============== 
  0        3.99     0-7 
  1        3.99     8-15 
  2        7.99     0-15 
 
This procedure was run on an rx7640 system with 16 GB of memory, with 
each cell configured to have 50% CLM. A portion of the CLM from each 
cell and of the ILM may be allocated for the console and is therefore 
not available to the operating system. As a result, you do not see 
4 GB for the first 2 RADs and 8 GB for the RAD with ILM.
4.1.3 RAD Memory Usage (Integrity servers and Alpha)
 
To determine the memory usage per RAD, use the SDA command SHOW 
PFN/RAD. This command reports the number of free and zeroed pages per 
RAD. Note that SHOW PFN/RAD is CPU intensive, so use it only 
occasionally to check the memory usage per RAD.
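
For example, the command can be issued from the System Dump Analyzer 
running against the live system (a minimal sketch; analyzing the 
running system requires CMKRNL privilege): 

$ ANALYZE/SYSTEM 
SDA> SHOW PFN/RAD 
SDA> EXIT 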
 
  
Chapter 5 Disaster Tolerance and Cluster Features
This chapter describes new features relating to disaster tolerance and 
clusters of the OpenVMS operating system.
5.1 Cluster over IP
 
OpenVMS Version 8.4 has been enhanced with the Cluster over IP feature. 
Cluster over IP provides the ability to form clusters beyond a single 
LAN or VLAN segment using the industry standard Internet Protocol. This 
feature provides improved disaster tolerance.
 
Cluster over IP enables you to:
 
  - Form a cluster between nodes in data centers that are in 
  different LAN or VLAN segments.
  
  - Form geographically distributed, disaster-tolerant clusters over 
  an IP network.
  
  - Reduce the total cost of ownership.
  
The Cluster over IP feature includes:
 
  - The ability for PEdriver to use the UDP protocol, in addition to 
  IEEE 802.3 LAN, for System Communications Services (SCS) packets.
  
  - Reliable delivery of SCS packets by PEdriver using the User 
  Datagram Protocol (UDP).
  
  - IP multicast and optional IP unicast to discover nodes in an 
  IP-only environment.
  
  - The ability to load TCP/IP Services at boot time to enable 
  cluster formation in an IP-only environment.
  
HP TCP/IP Services for OpenVMS Version 5.7 is required to use the 
Cluster over IP feature.
 
 
  Note 
  The Cluster over IP feature is also referred to as IP Cluster 
  Interconnect (IPCI). 
 
For more information, see the Guidelines for OpenVMS Cluster 
Configurations and HP OpenVMS Cluster Systems guides.
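
The following sketch outlines typical steps for enabling Cluster over 
IP on a node. It is illustrative only; the NISCS_USE_UDP parameter 
name and the prompts presented by the cluster configuration procedure 
should be verified against the cluster documentation cited above. 

$ ! 1. In SYS$SYSTEM:MODPARAMS.DAT, request IP cluster communication 
$ !    (assumed parameter name): 
$ !        NISCS_USE_UDP = 1 
$ ! 2. Run the cluster configuration procedure and answer the IP 
$ !    interconnect questions (interface, IP address, UDP port): 
$ @SYS$MANAGER:CLUSTER_CONFIG_LAN.COM 
$ ! 3. Regenerate system parameters and reboot: 
$ @SYS$UPDATE:AUTOGEN GETDATA REBOOT 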
 
5.2 Volume Shadowing for OpenVMS Enhancements
This section describes the new features of HP Volume Shadowing for 
OpenVMS Version 8.4. For more information about these features, see 
the HP Volume Shadowing for OpenVMS manual.
5.2.1 Support for Six-Member Shadow Sets
 
OpenVMS Version 8.4 supports shadow sets of up to six members, 
compared with the previous limit of three members. This feature is 
aimed at multi-site disaster-tolerant configurations. With 
three-member shadow sets, a three-site disaster-tolerant 
configuration can have only one shadow set member per site; if two 
sites fail, the member at the surviving site becomes a single point 
of failure. With six-member shadow sets, you can place two members of 
a shadow set at each of the three sites, providing higher 
availability.
5.2.2 New DISMOUNT Keyword for HBMM
 
All 12 write bitmaps can now be used by shadowing as multiuse 
bitmaps, removing the single point of failure of a single minicopy 
master bitmap. To invoke this feature, a new keyword has been added 
to the SET SHADOW/POLICY command:
DISMOUNT=n
 
where n specifies the number of HBMM bitmaps to convert to multiuse 
bitmaps when a member is dismounted from a shadow set with the 
DISMOUNT/POLICY=MINICOPY command.
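
A possible invocation is sketched below. The placement of the 
DISMOUNT keyword within the HBMM policy specification, and the shadow 
set name DSA12:, are illustrative only; verify the exact syntax 
against the HP Volume Shadowing for OpenVMS manual. 

$ ! Illustrative only: master the HBMM bitmaps on any available node 
$ ! and convert 4 bitmaps to multiuse bitmaps when a member is 
$ ! dismounted with DISMOUNT/POLICY=MINICOPY 
$ SET SHADOW DSA12: /POLICY=HBMM=((MASTER_LIST=*), DISMOUNT=4) 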
5.2.3 Fast Minicopy and Minimerge
 
Shadowing has been enhanced to increase the performance of minicopy 
and minimerge operations by looking ahead for the next bit that is 
set in the write bitmap.
 
This method drastically reduces the number of QIOs between 
SHADOW_SERVER and SYS$SHDRIVER, allowing minicopy and minimerge 
operations to complete faster.
5.2.4 New Qualifiers for SET SHADOW
The following qualifiers have been added to the SET SHADOW command (a 
usage sketch follows the list):
 
  - /DISABLE=SPLIT_READ_LBNS - disables the split-read-LBN behavior; 
  as a result, reads alternate between the source shadow set members 
  that have the same read cost and device queue length.
  
  - /ENABLE=SPLIT_READ_LBNS - logically divides the shadow set 
  members that have the same read cost into equal groups of logical 
  block numbers (LBNs). Reads directed to the virtual unit are issued 
  to the member that owns the corresponding LBN group. This maximizes 
  use of the controller read-ahead cache.
  
  - /STALL=WRITES[=nnn] - stalls write operations for nnn seconds. 
  If no value is specified for nnn, the write lock is released after 
  SHADOW_MBR_TMO seconds; the default is SHADOW_MBR_TMO.
  
  - /NOSTALL=WRITES[=nnn] - releases the write lock after nnn seconds 
  so that write operations continue on the shadow set.
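
The following sketch shows how these qualifiers might be used; DSA42: 
is a hypothetical shadow set virtual unit and the values are 
illustrative: 

$ ! Divide members with equal read cost into LBN groups for reads 
$ SET SHADOW DSA42: /ENABLE=SPLIT_READ_LBNS 
$ ! Revert to alternating reads between equally ranked members 
$ SET SHADOW DSA42: /DISABLE=SPLIT_READ_LBNS 
$ ! Stall writes to the shadow set for 60 seconds 
$ SET SHADOW DSA42: /STALL=WRITES=60 
$ ! Release the write stall so that writes continue immediately 
$ SET SHADOW DSA42: /NOSTALL=WRITES 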
  
5.2.5 Performance Improvement in Write Bitmaps
Write bitmaps (WBM) are used by OpenVMS during shadowing minimerge 
and minicopy operations. Information about which blocks on a disk 
have been written is transmitted to the other nodes within the 
cluster. The following updates have been made in this release.
 
Asynchronous SetBit Messages
 
There can be multiple master bitmap nodes for a shadow set. 
Previously, SetBit messages were sent to the master bitmap nodes 
synchronously: only when the response to a SetBit message was 
received from one remote master bitmap node was the message sent to 
the next, and the I/O resumed only after all of the remote master 
bitmap nodes had responded. SetBit messages are now sent to all 
master bitmap nodes asynchronously, and the I/O operation resumes 
when the responses from all the master bitmap nodes have been 
received. This reduces the time for which the write bitmap code 
stalls the I/O operation.
 
Reduced SetBit Messages for Sequential I/O
 
Sequential writes to a disk result in SetBit messages that set 
sequential bits in the remote bitmap. The WBM code now recognizes 
when a number of prior bits in the bitmap have already been set; in 
that case, it sets additional bits so that fewer SetBit messages are 
required if the sequential writes continue. Assuming the sequential 
I/O continues, the number of SetBit messages is reduced by about a 
factor of 10, improving the I/O rate for sequential writes.
 
  
Chapter 6 Storage Devices and I/O Support
This chapter describes the support added for storage devices and I/O 
controllers in this version of the OpenVMS operating system.
6.1 8 Gb Fibre Channel PCIe Adapter Support
 
Support has been added for the 1-port 8 Gb Fibre Channel PCI Express 
(PCIe) adapter (AH400A) and the 2-port 8 Gb Fibre Channel PCIe 
adapter (AH401A). For more information, see:  
http://www.hp.com/products1/serverconnectivity/storagesnf2/8gbfibre/index.html
 
OpenVMS also supports 2-port 8 Gb Fibre Channel Mezzanine Card for HP 
BladeSystem c-Class (product number 451871-B21).
6.2 HP AD317A PCI Sound Card Support
 
Limited support for HP AD317A PCI sound card on Integrity servers has 
been added.
6.3 Storage Devices and I/O Controllers Supported After the Initial OpenVMS V8.3-1H1 Release
 
The following storage devices and I/O controllers were released after 
OpenVMS V8.3-1H1 was shipped. V8.3-1H1 support was introduced through 
patch kits; all of these devices will be qualified and supported by 
OpenVMS Version 8.4.
 
  - HP Smart Array P700m
  
 - HP Smart Array P411
  
 - HP StorageWorks MDS600
  
 - HP StorageWorks D2D Backup Systems
  
 - HP StorageWorks Ultrium Tape blades
  
 - HP StorageWorks Secure Key Manager (SKM)
  
 - MSL LTO4 Encryption Kit
  
 - HP StorageWorks MSA2000fc Modular Smart Array (FC)
  
 - HP StorageWorks MSA2000sa Modular Smart Array (SAS)
  
 - 2xGigE LAN (Intel 82575), 1x10/100/1000 Management LAN
  
 - EVA4400
  
 - EVA6400/8400
  
 - P2000 G3
  
  