
HP OpenVMS Version 8.3-1H1
for Integrity Servers
New Features and Release Notes



1.3 General Application Compatibility Statement

OpenVMS is consistent in its policy that published APIs are supported on all subsequent releases. Applications that use published APIs are unlikely to require changes to support a new release of OpenVMS. APIs might be "retired" and thus removed from the documentation; however, a retired API continues to be available on OpenVMS as an undocumented interface.

1.4 Obtaining Patch Kits

Patch kits, also known as remedial kits, for HP products are available online at the HP IT Resource Center (ITRC). Use of the ITRC patch download site requires user registration and login. Registration is open to all users, and no service contract is required. You can register and log in from the following URL:


http://www1.itrc.hp.com/service/patch/mainPage.do

You can also use FTP to access patches from the following location:


ftp://ftp.itrc.hp.com/openvms_patches/i64/V8.3-1H1

1.5 Networking Options

OpenVMS provides customers with the flexibility to choose their own network protocol. Whether you require DECnet or TCP/IP, OpenVMS allows you to choose the protocol or combination of protocols that work best for your network. OpenVMS can operate with both HP and third-party networking products.

During the main installation procedure for OpenVMS Version 8.3-1H1, you have the option of installing the following supported HP networking software:

  • Either HP DECnet-Plus Version 8.3-1H1 for OpenVMS or HP DECnet Phase IV for OpenVMS. (Note that these DECnet products cannot run concurrently on your system.)
    DECnet-Plus contains all the functionality of the DECnet Phase IV product, plus the ability to run DECnet over TCP/IP or OSI protocols.
    Support for DECnet Phase IV is provided to customers under the Prior Version Support service; for more information, see Section 1.2.
  • HP TCP/IP Services for OpenVMS Version 5.6 ECO 2
    TCP/IP Services and DECnet can run concurrently on your system. Once you have installed HP DECnet-Plus for OpenVMS and TCP/IP Services on your system, you can run DECnet applications, OSI applications, or both, over your TCP/IP network. For more information about running DECnet over TCP/IP (RFC 1859) and OSI over TCP/IP (RFC 1006), see the DECnet-Plus for OpenVMS Management Guide.

Alternatively, after you install OpenVMS, you can install your choice of another third-party networking product that runs on OpenVMS Version 8.3-1H1.

For information about how to configure and manage your HP networking software after installation, see the TCP/IP, DECnet-Plus, or DECnet documentation. The manuals are available in online format on the OpenVMS Documentation CD, the Online Documentation Library CD, and the OpenVMS Documentation website:


http://h71000.www7.hp.com/doc/index.html

To order printed manuals from HP, see the Preface.

1.6 System Event Log on Integrity Servers

HP Integrity servers maintain a System Event Log (SEL) within system console storage, and OpenVMS I64 automatically transfers the contents of the SEL into the OpenVMS error log. If you are operating from the console during a successful boot operation, you might see a message indicating that the Baseboard Management Controller (BMC) SEL is full. You can safely continue when the BMC SEL is full by following the prompts; OpenVMS automatically processes and clears the contents of the SEL.

1.7 Firmware for Integrity Servers

OpenVMS Version 8.3-1H1 was tested with the latest firmware for each of the supported Integrity servers.

For the entry-class Integrity servers, HP recommends that you use the most current system firmware. For information about updating the system firmware for entry-class Integrity servers, see the HP OpenVMS Version 8.3-1H1 for Integrity Servers Upgrade and Installation Manual. (For rx7620, rx8620, and Superdome servers, call HP Customer Support to update your firmware.)

Table 1-1 lists the recommended firmware versions for entry-class Integrity servers:

Table 1-1 Firmware Versions for Entry-Class Integrity Servers

System     System Firmware   BMC Firmware   MP Firmware   DHPC Firmware
rx1600     4.27              4.01           E.03.30       N/A
rx1620     4.27              4.01           E.03.30       N/A
rx2600     2.31              1.53           E.03.30       N/A
rx2620     4.27              4.04           E.03.30       N/A
rx4640     4.28              4.04           E.03.30       1.10
rx2660*    1.05              5.06           F.01.58       N/A
rx3600*    2.03              5.14           F.01.58       N/A
rx6600*    2.03              5.14           F.01.58       N/A

*If you have Intel Itanium 9100 processors on your rx2660, rx3600, or rx6600, you need firmware at least one version later than the versions listed here.

For cell-based servers, you must access the MP Command Menu and issue the sysrev command to list the MP firmware revision level. The sysrev command is available on all HP Integrity servers that have an MP. Note that the EFI info fw command does not display the Management Processor (MP) firmware version on cell-based Integrity servers.
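
A minimal sketch of that sequence follows, assuming the default MP prompts (the prompts and menu labels can vary with the MP firmware version): enter CM at the MP Main Menu to reach the Command Menu, then issue sysrev.


MP> CM
MP:CM> sysrev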

To check firmware version information on an entry-class Integrity server that does not have an MP, enter the info fw command at the EFI prompt, as in the following example:


Shell> info fw

FIRMWARE INFORMATION

   Firmware Revision: 2.13 [4412]          (1)

   PAL_A Revision: 7.31/5.37
   PAL_B Revision: 5.65
   HI Revision: 1.02

   SAL Spec Revision: 3.01
   SAL_A Revision: 2.00
   SAL_B Revision: 2.13

   EFI Spec Revision: 1.10
   EFI Intel Drop Revision: 14.61
   EFI Build Revision: 2.10

   POSSE Revision: 0.10

   ACPI Revision: 7.00

   BMC Revision: 2.35                       (2)
   IPMI Revision: 1.00
   SMBIOS Revision: 2.3.2a
   Management Processor Revision: E.02.29   (3)

  1. The system firmware revision is 2.13.
  2. The BMC firmware revision is 2.35.
  3. The MP firmware revision is E.02.29.

The HP Integrity rx4640 server contains Dual Hot Plug Controller (DHPC) hardware with upgradable firmware. To check the current version of your DHPC firmware, enter the EFI command info chiprev, as shown in the following example. The hot-plug controller version is displayed; a display of 0100 indicates version 1.0, and a display of 0110 indicates version 1.1.


Shell> info chiprev

CHIP REVISION INFORMATION

   Chip                  Logical     Device       Chip
   Type                     ID         ID       Revision
   -------------------   -------     ------     --------
   Memory Controller         0       122b         0023
   Root Bridge               0       1229         0023
     Host Bridge          0000       122e         0032
     Host Bridge          0001       122e         0032
     Host Bridge          0002       122e         0032
     Host Bridge          0004       122e         0032
      HotPlug Controller     0          0         0110
     Host Bridge          0005       122e         0032
      HotPlug Controller     0          0         0110
     Host Bridge          0006       122e         0032
     Other Bridge            0          0         0002
       Other Bridge          0          0         0008
         Baseboard MC        0          0         0235

For instructions on how to access and use EFI, see the HP OpenVMS Version 8.3-1H1 for Integrity Servers Upgrade and Installation Manual. For more information, refer to the hardware documentation that is provided with your server.

For instructions on upgrading your firmware for your entry-class Integrity servers, refer to the HP OpenVMS Version 8.3-1H1 for Integrity Servers Upgrade and Installation Manual. To upgrade firmware for the rx7620, rx8620, or Superdome, contact HP Customer Support.

1.8 Release Notes on Booting the System

The following release notes pertain to booting the OpenVMS I64 system.

1.8.1 Booting from the Installation DVD

On I64 systems with the minimum amount of supported memory, the following message appears when booting from the installation DVD:


********* XFC-W-MemmgtInit Misconfigure Detected ********
XFC-E-MemMisconfigure MPW_HILIM + FREEGOAL > Physical Memory and no reserved memory for XFC
XFC-I-RECONFIG Setting MPW$GL_HILIM to no more than 25% of physical memory
XFC-I-RECONFIG Setting FREEGOAL to no more than 10% of physical memory
********* XFC-W-MemMisconfigure AUTOGEN should be run to correct configuration ********
********* XFC-I-MemmgtInit Bootstrap continuing ********

The message means that the system cache (XFC) initialization has successfully adjusted the SYSGEN parameters MPW_HILIM and FREEGOAL to allow caching to be effective during the installation. You can continue with the installation.
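
If you want to recalculate these parameters permanently after the installation completes, you can run AUTOGEN in the usual way. The following is a minimal sketch; the start phase, end phase, and execution mode shown are typical choices, not requirements:


$ @SYS$UPDATE:AUTOGEN GETDATA REBOOT NOFEEDBACK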

1.8.2 Setting Up I64 Systems to Reboot

An OpenVMS I64 system does not reboot automatically unless you have it set up to do so either by using EFI or by using the OpenVMS I64 Boot Manager utility.

For information about how to set up your I64 system to automatically reboot, refer to the HP OpenVMS Version 8.3-1H1 for Integrity Servers Upgrade and Installation Manual.

1.8.3 Booting with a Common Cluster System Disk

For configuring additional nodes to boot with a common cluster disk, refer to the CLUSTER_CONFIG_LAN utility described in the HP OpenVMS System Manager's Manual, Volume 1: Essentials.

For additional information about the I64 Boot Manager Boot Options Management Utility, see the HP OpenVMS System Manager's Manual, Volume 1: Essentials.

1.8.4 Booting from a Fibre Channel Storage Device

Many customers prefer to boot from a Fibre Channel (FC) storage device because of its speed and because it can serve as a common cluster system disk in a SAN. Booting from an FC storage device on OpenVMS I64 systems is significantly different from booting from an FC storage device on OpenVMS Alpha systems.

For instructions on how to configure and boot from an FC device on OpenVMS I64 systems, see the Fibre Channel appendix of the HP OpenVMS Version 8.3-1H1 for Integrity Servers Upgrade and Installation Manual.

1.8.5 OpenVMS I64 Boot Manager Utility: Adding Multipath Fibre Channel Disk Devices

The OpenVMS Boot Manager utility, BOOT_OPTIONS.COM, is used to specify a list of boot and dump devices in a SAN. When you add a multipath Fibre Channel disk device to the list, all paths to the device found in the SAN, including redundant paths, are listed.
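
The utility is an interactive command procedure supplied in SYS$MANAGER. You invoke it as shown below and then follow the prompts to add, display, or remove boot and dump devices:


$ @SYS$MANAGER:BOOT_OPTIONS.COM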

1.8.6 Fibre Channel Boot Disk: Simplified Setup Process

Setting up a Fibre Channel boot device requires the use of the OpenVMS I64 Boot Manager utility, BOOT_OPTIONS.COM, to specify values to the EFI Boot Manager. The OpenVMS I64 installation process has automated this step since Version 8.2-1, although the manual process is still available for those cases when it might be needed.

The OpenVMS I64 Version 8.3-1H1 installation process displays the name of a Fibre Channel disk as a boot device and prompts you to add the boot option. HP recommends that you accept this default. Alternatively, you can run the OpenVMS I64 Boot Manager after the installation or upgrade completes, as described in the HP OpenVMS Version 8.3-1H1 for Integrity Servers Upgrade and Installation Manual.

Note

If your system is a member of the rx1600, rx2600, or rx4600 family of servers and a Fibre Channel boot device is not listed in the EFI boot menu, you might experience a delay in the EFI initialization because the entire SAN is scanned.

Depending on the size of the SAN, this delay can range from several seconds to several minutes. Cell-based systems (the rx7620, rx8620, and Superdome families of servers) are not affected by this delay. This delay might occur when booting OpenVMS from the installation DVD for the first time on any OpenVMS I64 system.

For information on booting from a Fibre Channel boot device and updating the Fibre Channel adapter firmware, see the HP OpenVMS Version 8.3-1H1 for Integrity Servers Upgrade and Installation Manual.

1.9 HP DECwindows Motif

The following DECwindows Motif release notes are of interest to OpenVMS I64 users.

1.9.1 Connect Peripheral Devices Prior to Server Startup

To properly configure your system as a DECwindows X display server, you must have all the following peripheral components connected prior to startup:

  • Monitor
  • USB mouse
  • USB keyboard

Otherwise, the server system might not complete the device initialization process correctly. For example, starting up a server system without input devices (mouse and keyboard) results in a blank screen, and DECwindows does not start.

To correct this problem, connect all peripherals and then restart the DECwindows server or reboot the system. Note that only one keyboard and one mouse are supported. For DECwindows to detect and use these peripherals, they must appear as KBD0 and MOU0 in the output of the SHOW DEVICE KBD and SHOW DEVICE MOU commands.
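
For example, you can check the device names with commands similar to the following (the output format depends on your configuration):


$ SHOW DEVICE KBD
$ SHOW DEVICE MOU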

1.9.2 Countdown Messages Displayed During Startup

When running DECwindows Motif in client-only mode (with no server configured, or with no mouse or keyboard connected), messages similar to the following might be displayed during startup:


Waiting for mouse...
Waiting for keyboard...

These messages indicate that device polling is underway and are informational only. They disappear when the 15-second countdown is complete. This typically occurs on servers that incorporate built-in graphics as part of a Server Management Option when no keyboard or mouse is connected.

To prevent the messages and the 15-second delay, connect the input devices (USB mouse and USB keyboard) to the system before startup.

If you do not intend to use local graphics on the system, you can define the logical name in SYSTARTUP_VMS.COM as follows:


$ DEFINE/SYSTEM DECW$IGNORE_WORKSTATION TRUE

This prevents the DECwindows startup from attempting to start local graphics operation.

1.9.3 Optional Graphics

The ATI Radeon 7500 PCI option (HP part number AB551A) is supported on the entry-class Integrity servers for 2D multi-head and 3D operation. Refer to the installation guide or QuickSpecs for this device for information about configuration and use.

1.9.4 Keyboard Support

The OpenVMS USB keyboard (HP part number AB552A) is supported on all Integrity systems supporting DECwindows graphics. The keyboard is packaged with a three-button thumbwheel mouse.

1.9.5 Firmware Update and Keyboards

If you update the firmware on your system before upgrading to OpenVMS Version 8.3-1H1, the system might report more than one keyboard. If this happens, DECwindows might not start or might not have a usable keyboard. This applies to the rx2660, rx3600, rx6600, and BL860c systems.

To correct this problem, install OpenVMS Version 8.3-1H1, then delete all copies of SYS$SPECIFIC:[SYSEXE]USB$UCM_DEVICES.DAT.
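
For example, a command similar to the following deletes all versions of the file from the system-specific root; on a cluster with a common system disk, repeat it from each node so that every SYS$SPECIFIC root is cleaned:


$ DELETE SYS$SPECIFIC:[SYSEXE]USB$UCM_DEVICES.DAT;*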

Note

Be sure not to delete SYS$COMMON:[SYSEXE]USB$UCM_DEVICES.DAT. The newer firmware causes the keyboard in the presently unsupported vKVM hardware to become visible to OpenVMS. OpenVMS Version 8.3-1H1 has code to ignore the latent keyboard, a feature that OpenVMS Version 8.3 did not have.


Chapter 2
OpenVMS Version 8.3-1H1 New Features

This chapter describes the new features provided in OpenVMS I64 Version 8.3-1H1.

2.1 ISV Applications and Binary Compatibility in HP OpenVMS Version 8.3-1H1

With the HP OpenVMS hardware release (Version 8.3-1H1), HP wants to assure ISVs and Layered Product developers that existing application binaries that work today on OpenVMS Version 8.3 will continue to run unchanged on OpenVMS Version 8.3-1H1. OpenVMS Version 8.3-1H1 is engineered to maintain binary compatibility with OpenVMS Version 8.3. The thousands of applications written for OpenVMS Version 8.3 will continue to work unchanged on OpenVMS Version 8.3-1H1, through binary compatibility ensured by quality operating system engineering and lab testing.

Software recompilation is not required, nor will ISVs or Layered Product developers have to retest or requalify their products against Version 8.3-1H1 unless they wish to do so. If you have qualified on OpenVMS Version 8.3, you are already qualified on OpenVMS Version 8.3-1H1.

If ISVs or Layered Product developers uncover problems on OpenVMS Version 8.3-1H1 with products that work on OpenVMS Version 8.3, they should notify HP immediately. HP is committed to resolving any compatibility issues found with OpenVMS Version 8.3-1H1.

2.2 New Item Code DVI$_ADAPTER_IDENT

On Alpha and I64 systems, this item code returns (as a string) the description of an adapter as defined in either SYS$SYSTEM:SYS$CONFIG.DAT or SYS$SYSTEM:SYS$USER_CONFIG.DAT. Note that the service does not read either of those files at the time of the call; the files are read into memory in response to the SYSMAN IO REBUILD command, which typically happens while a system is booting.

HP recommends a buffer size of 255 bytes to hold the identification string.
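
If the item code is also exposed through the F$GETDVI lexical function (most DVI$_ item codes are available under the same name without the DVI$_ prefix, but confirm this in the DCL dictionary for your version), you could retrieve the string from DCL as in the following sketch. The device name EWA0 is only an example.


$ ! Hypothetical example; assumes ADAPTER_IDENT is accepted by F$GETDVI
$ WRITE SYS$OUTPUT F$GETDVI("EWA0:","ADAPTER_IDENT")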

2.3 New Item Code DVI$_MOUNTCNT_CLUSTER

On Alpha and I64 systems, this item code returns (as a longword) the number of systems in a cluster that have a device mounted. Note that this new item code is not a direct replacement for the existing item code DVI$_MOUNTCNT. That item code returns the number of mounters for any device on the local system. The /SHARE qualifier to the MOUNT command can allow for more than one mounter.
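
Assuming the new item code is likewise available through the F$GETDVI lexical function under the name MOUNTCNT_CLUSTER (an assumption; check the DCL dictionary for your version), the following sketch contrasts it with the existing local-only count. The device name DKA100 is only an example.


$ ! Hypothetical example; item names are assumed from the DVI$_ codes
$ WRITE SYS$OUTPUT F$GETDVI("DKA100:","MOUNTCNT")         ! mounts on this node
$ WRITE SYS$OUTPUT F$GETDVI("DKA100:","MOUNTCNT_CLUSTER") ! mounts clusterwide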

2.4 HP Smart Array P800 Controller (AD335A) Support

OpenVMS V8.3-1H1 supports the Smart Array P800 16-port serial attached SCSI (SAS) controller with PCI-Express (PCIe). The HP StorageWorks 60 Modular Storage Array (MSA60) and 70 Modular Storage Array (MSA70) SAS storage enclosures can be connected to the external ports of this controller.

For additional information, see the following web sites:


http://www.hp.com/products1/serverconnectivity/storagesnf2/sas/index.html
http://h18004.www1.hp.com/storage/disk_storage/msa_diskarrays/drive_enclosures/index.html

2.5 HP 4Gb Fibre Channel Adapters (AD299A/AD355A) Support

OpenVMS V8.3-1H1 supports the 2-port 4Gb Fibre Channel adapter and the 1-port 4Gb Fibre Channel adapter with PCI-Express (PCIe).

For additional information, see the following web site:


http://www.hp.com/products1/serverconnectivity/storagesnf2/4gbfibre/index.html

2.6 HP StorageWorks Ultrium 448c Tape Blade

OpenVMS V8.3-1H1 supports the Ultrium 448c half-height tape blade, which provides an integrated data protection solution for HP BladeSystem c-Class enclosures.

For additional information, see the following web site:


http://h18004.www1.hp.com/products/servers/storageworks/c-class/ultrium-448c/index.html

2.7 Storage and Network I/O Controllers Supported After the Initial OpenVMS V8.3 Release

The following storage and network I/O controllers were released after OpenVMS V8.3 shipped. Support in V8.3 was introduced via patch kits; all of these devices are now fully qualified and supported by OpenVMS V8.3-1H1.

  • HP Smart Array P400
  • HP StorageWorks SB40c Storage Blade
  • 2-port 4Gb FC PCIe (AD300A)
  • 2-port 4Gb FC Mezzanine (403619-B21)
  • 1-port 4Gb FC & 1-port GigE HBA PCI-X combo (AD193A)
  • 2-port 4Gb FC & 2-port GigE HBA PCI-X combo (AD194A)
  • 1-Port 1000Base-R PCI-X (AD331A)
  • 2-Port 1000Base-R PCI-X (AD332A)

2.8 Graphics Console Support for Selected HP Integrity Server Platforms

OpenVMS V8.3-1H1 provides support for a graphics console on selected HP Integrity platforms. This functionality allows the user to connect a monitor, keyboard, and mouse directly to connectors provided for that purpose on the bulkhead of the system. Previous versions of OpenVMS allowed console connections only via a serial terminal connected to a serial port on the bulkhead or via a terminal emulator over the network interface.

Additionally, the user can select a graphics option card in a PCI slot to be the graphics console. Some HP Integrity server systems might impose limitations on this capability and on the number of graphics PCI cards used for multiple-head graphics. For platform-specific limitations on the graphics console and multiple graphics heads, see your HP Integrity server documentation.

Note

Some platforms do not have an embedded graphics device. For those platforms, a graphics option card is required to provide a graphics console.

