HP OpenVMS Systems Documentation
HP OpenVMS Version 8.2 Release Notes
4.18 OpenVMS Cluster Systems
The release notes in this section pertain to OpenVMS Cluster systems.
V8.2
With few exceptions, OpenVMS Cluster software provides the same
features on OpenVMS I64 systems as it offers on OpenVMS Alpha and
OpenVMS VAX systems.
V8.2 The following exceptions are temporary:
4.18.3 Permanent Exceptions
V8.2
OpenVMS Cluster software supports three proprietary interconnects on Alpha systems that are not supported on OpenVMS I64 systems: DSSI (DIGITAL Storage Systems Interconnect), CI (cluster interconnect), and Memory Channel. Although DSSI and CI storage cannot be directly connected to OpenVMS I64 systems, data stored on CI and DSSI disks (connected to Alpha systems) can be served to OpenVMS I64 systems in the same cluster.

Multihost shared storage on a SCSI interconnect, commonly known as SCSI clusters, is not supported. (It is also not supported on OpenVMS Alpha systems for newer SCSI adapters.) However, multihost shared storage on industry-standard Fibre Channel is supported.
4.18.4 Patch Kits Needed for Cluster Compatibility
Before introducing an OpenVMS Version 8.2 system into an existing OpenVMS Cluster system, you must apply certain patch kits (also known as remedial kits) to your systems running earlier versions of OpenVMS. If you are using Fibre Channel, XFC, or Volume Shadowing, additional patch kits are required. Note that these kits are version specific.

The versions listed in Table 4-1 are supported in either a warranted configuration or a migration pair configuration. For more information about these configurations, refer to either HP OpenVMS Cluster Systems or the HP OpenVMS Version 8.2 Upgrade and Installation Manual.

Table 4-1 lists the facilities that require patch kits and the patch ID names. Each patch kit has a corresponding readme file with the same name (the file extension is .README). You can either download the patch kits from the following web site (select the OpenVMS Software Patches option), or contact your HP support representative to receive the patch kits on media appropriate for your system:
1For operating guidelines when using VAX systems in a cluster, refer to Section 4.18.2.
Note that VAX systems cannot be in a cluster with I64 systems. For a
complete list of warranted groupings within a cluster, refer to the
HP OpenVMS Version 8.2 Upgrade and Installation Manual.
V7.3-2
OpenVMS Alpha Version 7.2-1 introduced the multipath feature, which provides support for failover between the multiple paths that can exist between a system and a SCSI or Fibre Channel device. OpenVMS Alpha Version 7.3-1 introduced support for failover between Fibre Channel multipath tape devices.

This multipath feature can be incompatible with some third-party disk-caching, disk-shadowing, or similar products. HP advises that you not use such software on SCSI or Fibre Channel devices that are configured for multipath failover until this feature is supported by the producer of the software.

Third-party products that rely on altering the Driver Dispatch Table (DDT) of the OpenVMS Alpha SCSI disk class driver (SYS$DKDRIVER.EXE), the OpenVMS Alpha SCSI tape class driver (SYS$MKDRIVER.EXE), or the SCSI generic class driver (SYS$GKDRIVER) may need to be modified in order to function correctly with the SCSI multipath feature. Producers of such software can modify their software using the DDT Intercept Establisher routines introduced in OpenVMS Alpha Version 7.3-2. For more information about these routines, refer to the HP OpenVMS Alpha Version 7.3-2 New Features and Documentation Overview manual.
For more information about OpenVMS Alpha SCSI and Fibre Channel
multipath features, refer to Guidelines for OpenVMS Cluster Configurations.
V7.3-2
This note updates Table 8-3 (Data Requested by CLUSTER_CONFIG_LAN.COM and CLUSTER_CONFIG.COM) in the HP OpenVMS Cluster Systems manual. The documentation specifies a limit on the number of hexadecimal digits you can use for computers with direct access to the system disk. The limit is correct for VAX computers but not for Alpha computers. The command procedure prompts for the following information:
The documentation currently states: Press Return to accept the procedure-supplied default, or specify a name in the form SYSx:
The limit on the range of hexadecimal values with direct access to the system disk is correct for VAX computers. For Alpha computers with direct access to the system disk, the valid range of hexadecimal values is much larger. It includes both the VAX range of 1 through 9 or A through D, and also the range 10 through FFFF. Note that SYSE and SYSF are reserved for system use.
The HP OpenVMS Cluster Systems manual will include this information in its next
revision.
V7.3-1
In rare cases, in an OpenVMS Cluster configuration with both CI and multiple FDDI, 100 Mb/s, or Gb/s Ethernet-based circuits, you might observe SCS connections moving between CI and LAN circuits at intervals of approximately one minute. This frequent circuit switching can reduce cluster performance and may trigger mount verification of shadow set members.

PEdriver can detect and respond to LAN congestion that persists for a few seconds. When it detects a significant delay increase or packet losses on a LAN path, PEdriver removes the path from use; when it detects that the path has improved, it begins using the path again. Under marginal conditions, the additional load that cluster traffic places on a LAN path may cause its delay or packet losses to increase beyond acceptable limits. When the cluster load is removed, the path might appear sufficiently improved to come back into use.

If including a marginal LAN path raises the LAN circuit's load class above the CI's load class value of 140, and excluding that path drops the LAN circuit's load class below 140, SCS connections will move between CI and LAN circuits. You can observe connections moving between LAN and CI circuits by using SHOW CLUSTER with the CONNECTION and CIRCUITS classes added.
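The connection movement described above can be watched interactively with the SHOW CLUSTER utility. A minimal sketch of such a session, using the class names mentioned in this note (the exact report layout varies by configuration):

```
$ SHOW CLUSTER/CONTINUOUS
Command> ADD CONNECTION
Command> ADD CIRCUITS
```

Observing the continuous display for a few minutes shows whether connections are repeatedly migrating between the CI and LAN circuits.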
Workarounds
If excessively frequent connection moves are observed, you can use one of the following workarounds:
4.18.8 Gigabit Ethernet Switch Restriction in an OpenVMS Cluster System
Permanent Restriction

Attempts to add a Gigabit Ethernet node to an OpenVMS Cluster system over a Gigabit Ethernet switch fail if the switch and the Gigabit Ethernet adapter have different autonegotiation settings. If the switch is set to autonegotiation, the adapter must be as well, and conversely. Most Gigabit Ethernet adapters default to having autonegotiation enabled. An exception is the DEGXA on Alpha systems, where the EGn0_MODE console environment variable contains the desired setting, which must match the switch setting.

When an attempt to add a node fails because of this mismatch, the messages that are displayed can be misleading. If you are using CLUSTER_CONFIG.COM to add the node and you select the option to install a local page and swap disk, the problem might look like a disk-serving problem: the node running CLUSTER_CONFIG.COM displays the message "waiting for node-name to boot," while the booting node displays "waiting to tune system." The list of available disks is never displayed because the network path is missing as a result of the autonegotiation mismatch between the Gigabit adapter and the switch.

To avoid this problem, disable autonegotiation on the new node's Gigabit Ethernet adapter, as follows:
After this initial configuration, the LAN_FLAGS system parameter
setting for autonegotiation must be consistent with the switch settings
for all Gigabit Ethernet adapters in the system. If autonegotiation
should be disabled on only some of the adapters, set the run-time
settings appropriately in the LANCP device database. See the LANCP
chapter in the HP OpenVMS System Management Utilities Reference Manual
for details.
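A per-adapter setting can be sketched with LANCP as follows. The adapter name EWA0 is a hypothetical placeholder, and the qualifiers shown should be verified against the LANCP chapter for your adapter type before use:

```
$ MCR LANCP
LANCP> DEFINE DEVICE EWA0/SPEED=1000/FULL_DUPLEX/NOAUTONEGOTIATE  ! permanent database entry
LANCP> SET DEVICE EWA0/SPEED=1000/FULL_DUPLEX/NOAUTONEGOTIATE     ! current (volatile) setting
LANCP> EXIT
```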
V7.3-1 While the INITIALIZE command is in progress on a device in a Fibre Channel multipath tape set, multipath failover to another member of the set is not supported. If the current path fails while another multipath tape device is being initialized, retry the INITIALIZE command after the tape device fails over to a functioning path.
This restriction will be removed in a future release.
V7.3-1 Automatic path switching is not implemented in OpenVMS Alpha Version 7.3-1 or higher for SCSI medium changers (tape robots) attached to Fibre Channel using a Fibre-to-SCSI tape bridge. Multiple paths can be configured for such devices, but the only way to switch from one path to another is to use manual path switching with the SET DEVICE/SWITCH command.
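A manual path switch for such a device can be sketched as shown below. The device name and the path identifier are hypothetical placeholders; use SHOW DEVICE/FULL to list the paths actually configured on your system:

```
$ ! List the paths configured for the medium changer
$ SHOW DEVICE/FULL $2$GGA3:
$ ! Manually switch the device to the named path
$ SET DEVICE $2$GGA3:/SWITCH/PATH=PGA0.5000-1FE1-0001-0A01
```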
This restriction will be removed in a future release.
The following sections contain notes pertaining to OpenVMS Galaxy systems.
Note that OpenVMS Galaxy is supported on OpenVMS Alpha systems only.
V8.2 Because the HP OpenVMS Alpha Partitioning and Galaxy Guide is not being updated for OpenVMS Version 8.2, this note provides improved definitions of the term Galaxy, whose meaning depends on context.
4.19.2 OpenVMS Graphical Configuration Manager
The OpenVMS Graphical Configuration Manager (GCM) is now supported for
AlphaServer ES47/ES80/GS1280 Galaxy configurations. Previously, only
the Graphical Configuration Utility (GCU) was supported.
Permanent Restriction
On AlphaServer ES40 Galaxy systems, you cannot write a raw
(uncompressed) dump from instance 1 if instance 1's memory starts at or
above 4 GB (physical). Instead, you must write a compressed dump.
V7.3-1 When you implement Galaxy on an AlphaServer ES40 system, you must turn off Fast Path on instance 1. Do this by setting the SYSGEN parameter FAST_PATH to 0 on that instance.
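A sketch of setting the parameter with SYSGEN on instance 1 (a reboot of the instance is required for the new value to take effect):

```
$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE CURRENT        ! work on the parameter file used at boot
SYSGEN> SET FAST_PATH 0    ! disable Fast Path
SYSGEN> WRITE CURRENT      ! save so the setting persists across reboots
SYSGEN> EXIT
```

To keep the setting through future AUTOGEN runs, also add FAST_PATH = 0 to SYS$SYSTEM:MODPARAMS.DAT on that instance.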
If you do not turn off Fast Path on instance 1, I/O on instance 1 will
hang when instance 0 is rebooted. This hang will continue until the PCI
bus is reset and instance 1 rebooted. If there is shared SCSI or Fibre
Channel, I/O will hang on the sharing nodes and all paths to those
devices will be disabled.
V8.2 Version 3.2D is the recommended version of OpenVMS Management Station for OpenVMS I64 Version 8.2 and OpenVMS Alpha Version 8.2. However, OpenVMS Management Station is backward compatible with OpenVMS Version 6.2 and higher.
The OpenVMS Version 8.2 installation includes OpenVMS Management
Station Version 3.2D.
V7.3-2 If you create eight or more volatile subkeys in a key tree and then reboot a standalone system or a cluster, the OpenVMS Registry server can corrupt a Version 2 format Registry database when the server starts up after the reboot. To avoid this problem, do one of the following:
Note that Advanced Server for OpenVMS and COM for OpenVMS do not create
volatile keys.
V7.3-2 In OpenVMS Version 7.1 and higher, if you execute the DCL command DIRECTORY/SECURITY or DIRECTORY/FULL for files that contain Advanced Server (PATHWORKS) access control entries (ACEs), the hexadecimal representation for each Advanced Server ACE is no longer displayed. Instead, the total number of Advanced Server ACEs encountered for each file is summarized in the message, "Suppressed n PATHWORKS ACEs."
To display the suppressed ACEs, use the SHOW SECURITY command. You must
have the SECURITY privilege to display these ACEs. Note that, in
actuality, the command displays OpenVMS ACEs, including the %x86 ACE
that reveals the Windows NT® security descriptor information. The
Windows NT security descriptor information pertains to the Advanced
Server.
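For example, a sketch of displaying the full ACEs for a file (the file specification here is a hypothetical placeholder):

```
$ SHOW SECURITY/CLASS=FILE WORK1:[USERS]REPORT.TXT
```

With the SECURITY privilege, this lists the OpenVMS ACEs that DIRECTORY/SECURITY and DIRECTORY/FULL summarize as suppressed PATHWORKS ACEs.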
V7.3-2 The server management process, SMHANDLER, now starts automatically on Alpha systems that support it. System managers should remove references to the obsolete startup file, SYS$STARTUP:SYS$SMHANDLER_STARTUP.COM, from SYSTARTUP_VMS.COM or other site-specific startup files. This reference has been removed from SYSTARTUP_VMS.TEMPLATE.
Background: What is SMHANDLER?
On certain Alpha systems, the server management process is started to assist the system firmware in reporting and responding to imminent hardware failures. Failure conditions vary but typically include over-temperature conditions, fan failures, or power supply failures. SMHANDLER may report warning conditions to OPCOM, and may initiate a shutdown of OpenVMS if system firmware is about to power off a failing system. In most situations, a controlled shutdown of OpenVMS would be less disruptive than abrupt loss of system power.
To ensure the longest possible up time, system managers can set the
POWEROFF system parameter to 0. This prevents SMHANDLER from shutting
down OpenVMS on a failing system but does not prevent system firmware
from powering off the system.
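The persistent way to change a system parameter such as POWEROFF is through MODPARAMS.DAT followed by an AUTOGEN run, sketched here:

```
$ ! Add this line to SYS$SYSTEM:MODPARAMS.DAT:
$ !     POWEROFF = 0
$ ! Then regenerate and apply system parameters:
$ @SYS$UPDATE:AUTOGEN GETDATA SETPARAMS NOFEEDBACK
```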
V7.3-2
Previously, enabling SYSGEN audits or alarms did not produce audit or
alarm messages identifying the parameters being modified. As of OpenVMS
Version 7.3-2, this problem is corrected: audits and alarms now provide
a list of the changed parameters along with their old and new values.
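Auditing of SYSGEN parameter changes is controlled with the SET AUDIT command; for example:

```
$ SET AUDIT/AUDIT/ENABLE=SYSGEN   ! write SYSGEN events to the audit log
$ SET AUDIT/ALARM/ENABLE=SYSGEN   ! also send SYSGEN alarms to operator terminals
$ SHOW AUDIT                      ! confirm the enabled event classes
```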
V8.2 The following changes have been made to the SYSMAN DUMP_PRIORITY command: