HP OpenVMS Systems Documentation
Guidelines for OpenVMS Cluster Configurations
5.4 Determining Disk Availability Requirements
For storage subsystems, availability is determined by the availability
of the storage device as well as the availability of the path to the
device.
Some costs are associated with optimizing your storage subsystems for higher availability. Part of analyzing availability costs is weighing the cost of protecting data against the cost of unavailable data during failures. Depending on the nature of your business, the impact of storage subsystem failures may be low, moderate, or high.
Device and data availability options reduce and sometimes negate the
impact of storage subsystem failures.
Depending on your availability requirements, choose among the availability optimizers described in Table 5-4 for applications and data with the greatest need.
5.5 CI Based Storage
The CI interconnect provides the highest OpenVMS Cluster availability
with redundant, independent transmit-and-receive CI cable pairs. The CI
offers multiple access paths to disks and tapes by means of dual-ported
devices between HSC or HSJ controllers.
The following controllers and devices are supported by the CI interconnect:
5.6 DSSI Storage
DSSI-based configurations provide shared direct access to storage for systems with moderate storage capacity. The DSSI interconnect provides the lowest-cost shared access to storage in an OpenVMS Cluster.
The storage tables in this section may contain incomplete lists of
products.
DSSI configurations support the following devices:
Reference: RZ, TZ, and EZ SCSI storage devices are
described in Section 5.7.
5.7 SCSI Based Storage
The Small Computer Systems Interface (SCSI) bus is a storage interconnect based on an ANSI industry standard. You can connect up to a total of 8 or 16 nodes (3 of which can be CPUs) to the SCSI bus.
The following devices can connect to a single host or multihost SCSI bus:
The following devices can connect only to a single host SCSI bus:
5.8 Fibre Channel Based Storage
The Fibre Channel interconnect is a storage interconnect that is based
on an ANSI industry standard.
The HSG storage controllers can connect to a single host or to a
multihost Fibre Channel interconnect.
5.9 Host-Based Storage
Host-based storage devices can be connected locally to OpenVMS Cluster member systems using local adapters. You can make this locally connected storage available to other OpenVMS Cluster members by configuring a node as an MSCP server.
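The exact steps depend on your configuration, but as a minimal sketch (the parameter values shown are examples, not recommendations), MSCP serving is typically enabled through the MSCP_LOAD and MSCP_SERVE_ALL system parameters, for instance by adding entries to SYS$SYSTEM:MODPARAMS.DAT and then running AUTOGEN:

    ! Illustrative MODPARAMS.DAT entries on the serving node
    MSCP_LOAD = 1          ! Load the MSCP server at system startup
    MSCP_SERVE_ALL = 2     ! Serve locally attached disks to other cluster members

    $ ! Apply the new parameters (these settings take effect at the next reboot)
    $ @SYS$UPDATE:AUTOGEN GETDATA SETPARAMS NOFEEDBACK

CLUSTER_CONFIG.COM can also manage these settings when you add or change a cluster node.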
You can use local adapters to connect each disk to two access paths
(dual ports). Dual porting allows automatic failover of disks between
nodes.
5.9.1 Internal Buses
Locally connected storage devices attach to a system's internal bus. Alpha systems use the following internal buses:
VAX systems use the following internal buses:
5.9.2 Local Adapters
Following is a list of local adapters and their bus types:
Chapter 6
Configuring Multiple Paths to SCSI and Fibre Channel Storage
The V7.2-2S1 kit provides support for failover between local and MSCP served paths to SCSI disk devices. This capability is enabled by setting the MPDEV_REMOTE system parameter to 1. The default value of MPDEV_REMOTE is 0, and it must remain 0 unless the V7.2-2S1 kit is installed. The kit also includes fixes and changes that are beneficial even if MPDEV_REMOTE is left disabled, such as avoiding controller failover when a device is mounted.

This SCSI multipath feature may be incompatible with some third-party disk caching, disk shadowing, or similar products. Compaq advises that you not use such software on SCSI devices that are configured for multipath failover (for example, SCSI devices that are connected to HSZ70 and HSZ80 controllers in multibus mode) until this feature is supported by the producer of the software. Refer to Section 6.2 for important requirements and restrictions for using the multipath SCSI function.

Note that the Fibre Channel and parallel SCSI interconnects are shown generically in this chapter. Each is represented as a horizontal line to which the node and storage subsystems are connected. Physically, the Fibre Channel interconnect is always radially wired from a switch, as shown in Figure 7-1. Parallel SCSI can be radially wired to a hub or can be a daisy-chained bus.

The representation of multiple SCSI disks and SCSI buses in a storage subsystem is also simplified. The multiple disks and SCSI buses, which one or more HSZx or HSGx controllers serve to a host as a logical unit, are shown in the figures as a single logical unit.
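To enable the failover support that MPDEV_REMOTE controls (and only after the V7.2-2S1 kit is installed on every host that shares direct access to the affected devices), set the parameter as you would any other system parameter. The following is a minimal sketch using SYSGEN; setting the parameter through SYS$SYSTEM:MODPARAMS.DAT and AUTOGEN is equally valid:

    $ RUN SYS$SYSTEM:SYSGEN
    SYSGEN> USE CURRENT
    SYSGEN> SET MPDEV_REMOTE 1
    SYSGEN> WRITE CURRENT
    SYSGEN> EXIT

The value written to the CURRENT parameter file takes effect at the next reboot.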
The following topics are presented in this chapter:
6.1 Overview of Multipath SCSI Support
A multipath SCSI configuration provides failover from one path to a device to another path to the same device. Multiple paths to the same device increase the availability of that device for I/O operations. Multiple paths also offer higher aggregate performance. Figure 6-1 shows a multipath SCSI configuration. Two paths are configured from a computer to the same virtual storage device.
Multipath SCSI configurations can use either parallel SCSI or Fibre Channel as the storage interconnect, as illustrated by Figure 6-1.
Two or more paths to a single device are called a multipath set. When the system configures a path to a device, it checks for an existing device with the same name but a different path. If such a device is found, and multipath support is enabled, the system either forms a multipath set or adds the new path to an existing set. If multipath support is not enabled, then no more than one path to a device is configured.
The system presents a multipath set as a single device. The system selects one path to the device as the "current" path, and performs all I/O over this path until there is a failure or the system manager requests that the system switch to another path.
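For example (a hedged sketch; the device and path names below are hypothetical), the paths in a multipath set and the current path can be displayed, and a manual path switch requested, with DCL commands such as the following:

    $ ! Display path information for one device and list multipath sets
    $ SHOW DEVICE/FULL $1$DKA100:
    $ SHOW DEVICE/MULTIPATH_SET
    $ ! Request a switch to another path in the set (path name is illustrative)
    $ SET DEVICE $1$DKA100: /SWITCH /PATH=PKB0.1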
Multipath SCSI support provides the following types of failover:
- Direct SCSI to direct SCSI failover
- Direct SCSI to MSCP served failover
Direct SCSI to direct SCSI failover requires the use of multiported
SCSI devices. Direct SCSI to MSCP served failover requires multiple
hosts per SCSI bus, but does not require multiported SCSI devices.
These two failover types can be combined. Each type and the combination
of the two are described next.
6.1.1 Direct SCSI to Direct SCSI Failover
Direct SCSI to direct SCSI failover can be used on systems with multiported SCSI devices. The dual HSZ70, the HSZ80 and the HSG80 are examples of multiported SCSI devices. A multiported SCSI device can be configured with multiple ports on the same physical interconnect so that if one of the ports fails, the host can continue to access the device through another port. This is known as transparent failover mode and has been supported by OpenVMS since Version 6.2.
OpenVMS Version 7.2 introduced support for a new failover mode in which the multiported device can be configured with its ports on different physical interconnects. This is known as multibus failover mode.
The HSx failover modes are selected by HSx console commands. Transparent and multibus modes are described in more detail in Section 6.3.
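For example, on an HSZ80 pair the mode might be selected with controller CLI commands along the following lines (an illustration only; consult the controller documentation for the exact syntax and prerequisites):

    HSZ80> SET NOFAILOVER
    HSZ80> SET MULTIBUS_FAILOVER COPY=THIS_CONTROLLER
    HSZ80> SHOW THIS_CONTROLLER

Here SET NOFAILOVER dissolves any existing failover pairing, SET MULTIBUS_FAILOVER COPY=THIS_CONTROLLER places the dual-redundant pair in multibus mode using this controller's configuration, and SHOW THIS_CONTROLLER verifies the resulting failover mode.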
Figure 6-1 is a generic illustration of a multibus failover configuration.
Configure multiple direct SCSI paths to a device only when multipath support is enabled on all connected nodes and the HSZ/G is in multibus failover mode.
The two logical disk devices shown in Figure 6-1 represent virtual storage units that are presented to the host by the HSx controller modules. Each logical storage unit is "on line" to one of the two HSx controller modules at a time. When there are multiple logical units, they can be on line to different HSx controllers so that both HSx controllers can be active at the same time.
In transparent mode, a logical unit switches from one controller to the other when an HSx controller detects that the other controller is no longer functioning.
In multibus mode, as shown in Figure 6-1, a logical unit switches from one controller to the other when one of the following events occurs:
Figure 6-1 Multibus Failover Configuration
Note the following about Figure 6-1:
The multibus configuration offers the following advantages over transparent failover:
6.1.2 Direct SCSI to MSCP Served Failover
OpenVMS provides support for multiple hosts that share a SCSI bus. This is known as a multihost SCSI OpenVMS Cluster system. In this configuration, the SCSI bus is a shared storage interconnect. Cluster communication occurs over a second interconnect (LAN, DSSI, CI, or MEMORY CHANNEL).
Multipath support in a multihost SCSI OpenVMS Cluster system enables failover from directly attached SCSI storage to MSCP served SCSI storage, as shown in Figure 6-2.
Figure 6-2 Direct SCSI to MSCP Served Configuration With One Interconnect
Note the following about this configuration:
Multipath support in such a multihost SCSI OpenVMS Cluster system also enables failover from MSCP served SCSI storage to directly attached SCSI storage. For example, the following sequence of events can occur on the configuration shown in Figure 6-2:
In this document, the capability to fail over from direct SCSI to MSCP served paths implies the ability to fail over in either direction between direct and served paths.
6.1.3 Configurations Combining Both Types of Multipath Failover
In a multihost SCSI OpenVMS Cluster system, you can increase storage availability by configuring the cluster for both types of multipath failover (direct SCSI to direct SCSI and direct SCSI to MSCP served SCSI), as shown in Figure 6-3.
Figure 6-3 Direct SCSI to MSCP Served Configuration With Two Interconnects
Note the following about this configuration:
This configuration provides the advantages of both direct SCSI failover
and direct to MSCP served failover.
6.2 Configuration Requirements and Restrictions
The requirements for multipath SCSI and FC configurations are presented in Table 6-1.
Host adapter: For parallel SCSI, the KZPBA-CB must be used. It is the only SCSI host adapter that supports multipath failover on OpenVMS.

Alpha console firmware: For systems with HSZ70 and HSZ80, the minimum revision level is 5.3 or 5.4, depending on your AlphaServer. For systems with HSG80, the minimum revision level is 5.4.

Controller firmware: For HSZ70, the minimum revision level is 7.3; for HSZ80, it is 8.3; for HSG80, it is 8.4.

Controller module mode: Must be set to multibus mode. The selection is made at the HSx console.
Full connectivity: All hosts that are connected to an HSx in multibus mode must have a path to both HSx controller modules. This is because hosts that are connected exclusively to different controllers will switch the logical unit back and forth between controllers, preventing any I/O from executing. To prevent this from happening, always provide full connectivity from hosts to controller modules. If a host's connection to a controller fails, take one of the following steps to avoid indefinite path switching:
Allocation classes: For parallel SCSI, a valid HSZ allocation class is required (refer to Section 6.5.3). If a SCSI bus is configured with HSZ controllers only, and all the controllers have a valid HSZ allocation class, then it is not necessary to adhere to the older SCSI device naming rules for that bus. That is, the adapters do not require a matching port allocation class, or a matching node allocation class and matching OpenVMS adapter device names.

However, if there are non-HSZ devices on the bus, or HSZ controllers without an HSZ allocation class, then the standard rules for node and port allocation class assignments and controller device names for shared SCSI buses must be followed.

Booting from devices with an HSZ allocation class is supported on all AlphaServers that support the KZPBA-CB except the AlphaServer 2x00(A).

The controller allocation class is not used for FC devices.
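As a hedged illustration of the parallel SCSI case (the allocation class value and device names are hypothetical, and the console command is shown in its typical form; check your controller documentation), the HSZ allocation class is assigned at the controller console and then appears in the OpenVMS device names:

    HSZ70> SET THIS_CONTROLLER ALLOCATION_CLASS=5
    HSZ70> SHOW THIS_CONTROLLER

    $ ! Units behind that controller are then named with the allocation class
    $ SHOW DEVICE $5$DKA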
The restrictions for multipath FC and SCSI configurations are presented in Table 6-2.
Devices supported: DKDRIVER disk devices attached to HSZ70, HSZ80, and HSG80 controller modules are supported. Other device types, such as tapes, and generic class drivers, such as GKDRIVER, are not supported.

Note that under heavy load, a host-initiated manual or automatic switch from one controller to another may fail on an HSZ70 or HSZ80 controller. Testing has shown this to occur infrequently. This problem has been fixed for the HSZ70 in firmware HSOF V7.7 and later versions and will be fixed for the HSZ80 in a future release. The problem does not occur on an HSG80 controller.
Mixed-version and mixed-architecture clusters: All hosts that are connected to an HSZ or HSG in multibus mode must be running OpenVMS Version 7.2 or higher.

As long as MPDEV_REMOTE is off, you can install the V7.2-2S1 kit on any subset of the V7.2-2 nodes in your cluster; this is how to perform a rolling upgrade of the kit across your cluster. Before you set MPDEV_REMOTE to 1 on a system, all systems that share direct access with that system to any SCSI or Fibre Channel disk must also be running the V7.2-2S1 kit. Because the kit requires V7.2-2, all of these nodes must be running V7.2-2; in particular, such a node cannot be running V7.3. If you enable MPDEV_REMOTE on one system, Compaq recommends that you enable it on all systems that have direct access to shared SCSI or Fibre Channel devices, which results in higher data availability. Perhaps more importantly, this is the configuration that has received the majority of testing. However, there are no known problems if MPDEV_REMOTE is enabled on only a subset of such nodes.
SCSI to MSCP failover and MSCP to SCSI failover: Multiple hosts must be attached to the SCSI disk devices via a shared SCSI bus (either parallel SCSI or Fibre Channel). All the hosts on the shared SCSI bus must be running V7.2-2S1, and the MPDEV_REMOTE system parameter must be set to 1 on these hosts.
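As an illustrative check only, each host's base OpenVMS version and its current MPDEV_REMOTE setting can be verified from DCL:

    $ ! Show the base OpenVMS version reported by the system
    $ WRITE SYS$OUTPUT F$GETSYI("VERSION")
    $ ! Show the current multipath remote-path setting
    $ MCR SYSGEN SHOW MPDEV_REMOTE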