
Guidelines for OpenVMS Cluster Configurations



7.2.1 Fibre Channel Remedial Kits

Qualification of new Fibre Channel hardware and larger configurations is ongoing, and new hardware and larger configurations may necessitate enhancements to the Fibre Channel support in OpenVMS. Between releases of OpenVMS, enhancements and corrections to the Fibre Channel software are made available by means of remedial kits. Compaq recommends that you monitor the Fibre Channel web site (http://www.openvms.compaq.com/openvms/fibre/) and the Compaq support web site (http://h18000.www1.hp.com/support/) for updates for the operating system version you are running.

The latest version of each kit is always posted to the Compaq support web site.

7.2.2 Mixed-Version and Mixed-Architecture Cluster Support

Shared Fibre Channel OpenVMS Cluster storage is supported in both mixed-version and mixed-architecture OpenVMS Cluster systems. The following configuration requirements must be observed:

  • All hosts configured for shared access to the same storage devices must be in the same OpenVMS Cluster.
  • All hosts in the cluster require a common cluster communication interconnect, such as a LAN, CI, or DSSI.
  • All hosts with a direct connection to the FC must be running one of the following versions of OpenVMS Alpha:
    7.3
    7.2-1H1
    7.2-1
    7.2 with the DEC-AXPVMS-VMS72_HARDWARE-V0100--4.PCSI remedial kit and console revision 5.4 or higher, depending on the AlphaServer model (see the release notes)
  • All hosts that receive MSCP service of FC disks must have one of the following update kits (or later) installed (a verification sketch follows this list):
    • Alpha systems
      • Version 6.2: ALPDRIV20_062
      • Version 7.1: ALPDRIV11_071
      • Version 7.1-2: DEC-AXPVMS-VMS712_DRIVER-V0300--4.PCSI
    • VAX systems
      • Version 6.2: VAXDRIV07_062
      • Version 7.1: VAXDRIV05_071
      • Version 7.2: VAXDRIV01_072
  • DECevent Version 2.9 or later must be used for error tracing. Earlier versions of DECevent do not support Fibre Channel.
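
For kits delivered in PCSI format (file type .PCSI), you can verify that a kit is installed with the POLYCENTER Software Installation utility. The following is a minimal sketch assuming the Version 7.1-2 driver kit listed above; substitute the product name for your kit. Kits installed with VMSINSTAL (such as ALPDRIV20_062) do not appear in this display.


$ PRODUCT SHOW PRODUCT VMS712_DRIVER    ! lists the kit if it is installed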

7.2.3 Fibre Channel and Volume Shadowing for OpenVMS

Volume Shadowing for OpenVMS has supported the shadowing of directly connected Fibre Channel storage since Fibre Channel support was introduced in OpenVMS Alpha. OpenVMS Alpha Version 7.2-1 extended this support to the shadowing of Fibre Channel multipath devices.

7.2.4 Fibre Channel and OpenVMS Galaxy Configurations

Fibre Channel is supported in all OpenVMS Galaxy configurations. For more information about Galaxy configurations, see the OpenVMS Alpha Partitioning and Galaxy Guide.

7.3 Example Configurations

This section presents example Fibre Channel configurations. The configurations build on each other, starting with the smallest valid configuration and adding redundant components for increasing levels of availability, performance, and scalability.

7.3.1 Single Host with Dual-Ported Storage

Figure 7-3 shows a single system using Fibre Channel as a storage interconnect.

Figure 7-3 Single Host With One Dual-Ported Storage Controller


Note the following about this configuration:

  • Dual ports of the HSG storage controller increase the availability and performance of the storage subsystem.
  • Extra ports on the switch enable system growth.
  • To maximize performance, logical units can be spread over the two HSG ports.
  • The switch and the HSG are single points of failure. To provide higher availability, Volume Shadowing for OpenVMS can be used to replicate the data to another Fibre Channel switch and HSG controller.

7.3.2 Multiple Hosts With One Dual-Ported Storage Controller

Figure 7-4 shows multiple hosts connected to a dual-ported storage subsystem.

Figure 7-4 Multiple Hosts With One Dual-Ported Storage Controller


Note the following about this configuration:

  • Multiple hosts increase availability of the entire system.
  • Extra ports on the switch enable system growth.
  • The switch and the HSG are single points of failure. To provide higher availability, Volume Shadowing for OpenVMS can be used to replicate the data to another Fibre Channel switch and HSG controller.

7.3.3 Multiple Hosts With Storage Controller Redundancy

Figure 7-5 shows multiple hosts connected to two dual-ported storage controllers.

Figure 7-5 Multiple Hosts With Storage Controller Redundancy


This configuration offers the following advantages:

  • Logical units can be spread over the four HSG ports, offering higher performance.
  • HSGs can be configured in multibus failover mode, even though there is just one Fibre Channel "bus."
  • The switch is still a single point of failure. To provide higher availability, Volume Shadowing for OpenVMS can be used to replicate the data to another Fibre Channel switch and HSG controller.

7.3.4 Multiple Hosts With Multiple Independent Switches

Figure 7-6 shows multiple hosts connected to two switches, each of which is connected to a pair of dual-ported storage controllers.

Figure 7-6 Multiple Hosts With Multiple Independent Switches


This two-switch configuration offers the advantages of the previous configurations plus the following:

  • Higher level of availability afforded by two switches. There is no single point of failure.
  • Better performance because of the additional host bus adapter.
  • Each host has multiple independent paths to a storage subsystem. The two switches are not connected to each other to ensure that the paths are completely independent.

7.3.5 Multiple Hosts With Dual Fabrics

Figure 7-7 shows multiple hosts connected to two fabrics; each fabric consists of two switches.

Figure 7-7 Multiple Hosts With Dual Fabrics


This dual-fabric configuration offers the advantages of the previous configurations plus the following advantages:

  • More ports are available per fabric for connecting to additional hosts and storage subsystems.
  • Each host has four host bus adapters, one for each switch. Only two adapters are required, one per fabric. The additional adapters increase availability and performance.

7.3.6 Multiple Hosts With Larger Fabrics

The configurations shown in this section offer even higher levels of performance and scalability.

Figure 7-8 shows multiple hosts connected to two fabrics. Each fabric has four switches.

Figure 7-8 Multiple Hosts With Larger Dual Fabrics


Figure 7-9 shows multiple hosts connected to four fabrics. Each fabric has four switches.

Figure 7-9 Multiple Hosts With Four Fabrics


7.4 Fibre Channel Addresses, WWIDs, and Device Names

Fibre Channel devices for disk and tape storage come with factory-assigned worldwide IDs (WWIDs). These WWIDs are used by the system for automatic FC address assignment. The FC WWIDs and addresses also provide the means for the system manager to identify and locate devices in the FC configuration. The FC WWIDs and addresses are displayed, for example, by the Alpha console and by the HSG console. It is necessary, therefore, for the system manager to understand the meaning of these identifiers and how they relate to OpenVMS device names.

7.4.1 Fibre Channel Addresses and WWIDs

In most situations, Fibre Channel devices are configured to have temporary addresses. The device's address is assigned automatically each time the interconnect initializes, and the device may receive a new address each time the Fibre Channel interconnect is reconfigured and reinitialized. This is done so that Fibre Channel devices do not require the use of address jumpers. There is one Fibre Channel address per port, as shown in Figure 7-10.

Figure 7-10 Fibre Channel Host and Port Addresses


In order to provide more permanent identification, each port on each device has a WWID, which is assigned at the factory. Every Fibre Channel WWID is unique. Fibre Channel also has node WWIDs to identify multiported devices. WWIDs are used by the system to detect and recover from automatic address changes. They are useful to system managers for identifying and locating physical devices.
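
The WWIDs of Fibre Channel storage devices visible to a host can also be displayed at the AlphaServer console with the WWIDMGR utility (see the WWIDMGR User's Manual). The following is a sketch; on some AlphaServer models the console must first be placed in diagnostic mode:


P00>>>set mode diag
P00>>>wwidmgr -show wwid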

Figure 7-11 shows Fibre Channel components with their factory-assigned WWIDs and their Fibre Channel addresses.

Figure 7-11 Fibre Channel Host and Port WWIDs and Addresses


Note the following about this figure:

  • The node name and port name of the host adapter are identical.
    (This is true in the current implementation.)
  • The host adapter's port name and node name are a 64-bit, factory-assigned WWID.
  • The host adapter's address is a 24-bit, transient value that is assigned automatically.
  • Each HSG storage port has a 64-bit, factory-assigned WWID and a 24-bit transient address that is assigned automatically.
  • The HSG controller pair shares a node name that is a 64-bit, factory-assigned WWID.

7.4.2 OpenVMS Names for Fibre Channel Devices

There is an OpenVMS name for each Fibre Channel storage adapter, for each path from the storage adapter to the storage subsystem, and for each storage device. The sections that follow apply to both disk devices and tape devices, except for Section 7.4.2.3, which is specific to disk devices. Tape device names are described in Section 7.5.

7.4.2.1 Fibre Channel Storage Adapter Names

Fibre Channel storage adapter names, which are automatically assigned by OpenVMS, take the form FGx0:

  • FG represents Fibre Channel.
  • x represents the unit letter, from A to Z.
  • 0 is a constant.

The naming design places a limit of 26 adapters per system. This naming may be modified in future releases to support a larger number of adapters.

Fibre Channel adapters can run multiple protocols, such as SCSI and LAN. Each protocol is a pseudodevice associated with the adapter. For the initial implementation, only the SCSI protocol is supported. The SCSI pseudodevice name is PGx0, where x represents the same unit letter as the associated FGx0 adapter.
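
You can list these devices with the DCL SHOW DEVICE command. The following is a sketch; the adapters displayed depend on your configuration:


$ SHOW DEVICE FG    ! lists the FGx0 Fibre Channel adapters
$ SHOW DEVICE PG    ! lists the associated PGx0 SCSI pseudodevices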

These names are illustrated in Figure 7-12.

Figure 7-12 Fibre Channel Initiator and Target Names


7.4.2.2 Fibre Channel Path Names

With the introduction of multipath SCSI support, as described in Chapter 6, it is necessary to identify specific paths from the host to the storage subsystem. This is done by concatenating the SCSI pseudodevice name, a decimal point (.), and the WWID of the storage subsystem port that is being accessed. For example, the Fibre Channel path shown in Figure 7-12 is named PGB0.4000-1FE1-0000-0D04.

Refer to Chapter 6 for more information on the display and use of the Fibre Channel path name.
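
Path names appear in the SHOW DEVICE/FULL display of a multipath device. The following is a sketch that uses a disk name of the form described in the next section; the display lists each path in the PGx0.wwid format and identifies the path currently in use:


$ SHOW DEVICE/FULL $1$DGA567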

7.4.2.3 Fibre Channel Disk Device Identification

The four identifiers associated with each FC disk device are shown in Figure 7-13.

Figure 7-13 Fibre Channel Disk Device Naming


The logical unit number (LUN) is used by the system as the address of a specific device within the storage subsystem. This number is set and displayed from the HSG console by the system manager. It can also be displayed by the OpenVMS SDA utility.

Each Fibre Channel disk device also has a WWID to provide permanent, unique identification of the device. The HSG device WWID is 128 bits. Half of this identifier is the WWID of the HSG that created the logical storage device, and the other half is specific to the logical device. The device WWID is displayed by the HSG console and the AlphaServer console.

The third identifier associated with the storage device is a user-assigned device identifier. A device identifier has the following attributes:

  • User assigned at the HSG console.
  • User must ensure it is cluster unique.
  • Moves with the device.
  • Can be any decimal number from 0 to 32766, except for MSCP-served devices.
    If the FC disk device is MSCP served, the device identifier is limited to 9999. This restriction will be removed in a future release.

The device identifier has a value of 567 in Figure 7-13. This value is used by OpenVMS to form the device name so it must be unique throughout the cluster. (It may be convenient to set the device identifier to the same value as the logical unit number (LUN). This is permitted as long as the device identifier is unique throughout the cluster.)

A Fibre Channel storage device name is formed by the operating system from the constant $1$DGA and a device identifier, nnnnn. The only variable part of the name is its device identifier, which you assign at the HSG console. Figure 7-13 shows a storage device that is known to the host as $1$DGA567.
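
As a sketch of how such a name arises, the device identifier might be assigned at the HSG console as follows (the unit number D4, the prompt, and the exact CLI syntax are illustrative; see your HSG documentation):


HSG> SET D4 IDENTIFIER=567

After the hosts configure the device, it is known throughout the cluster as $1$DGA567.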

7.5 Fibre Channel Tape Support (Alpha)

This section describes the configuration requirements and user commands necessary to utilize the Fibre Channel tape functionality. Fibre Channel tape functionality refers to the support of SCSI tapes and SCSI tape libraries in an OpenVMS Cluster system with shared Fibre Channel storage. The SCSI tapes and libraries are connected to the Fibre Channel by a Fibre-to-SCSI bridge known as the Modular Data Router (MDR).

7.5.1 Minimum Hardware Configuration

Following is the minimum Fibre Channel tape hardware configuration:

  • Alpha system with KGPSA adapter
  • Compaq Modular Data Router (MDR), minimum firmware revision 1170
  • Fibre Channel Switch
  • Tape library, for example:
    • ESL9000 series
    • TL891
    • TL895
  • Individual tapes, for example:
    • TZ89
    • DLT8000

Note

The MDR must be connected to a switch and not directly to an Alpha system.

The MDR must be in SCSI Command Controller (SCC) mode, which is normally the default. If the MDR is not in SCC mode, use the command SetSCCmode On at the MDR console.

Tapes are not supported in an HSGxx storage subsystem nor behind an FCTC-II (Fibre Channel Tape Controller II).

Tape devices and tape library robots must not be set to SCSI target ID 7, because that ID is reserved for use by the MDR. A tape library robot is an example of a medium changer device, which is the term used throughout this section.

7.5.2 Overview of Fibre Channel Tape Device Naming

This section provides detailed background information about Fibre Channel Tape device naming.

Tape and medium changer devices are automatically named and configured using the SYSMAN IO FIND_WWID and IO AUTOCONFIGURE commands described in Section 7.5.3. System managers who configure tapes on Fibre Channel should refer directly to that section for the tape configuration procedure.

7.5.2.1 Tape and Medium Changer Device Names

Fibre Channel tapes and medium changers are named using a scheme similar to Fibre Channel disk naming.

On parallel SCSI, the device name of a directly attached tape implies the physical location of the device; for example, MKB301 resides on bus B, SCSI target ID 3, and LUN 1. Such a naming scheme does not scale well for Fibre Channel configurations, in which the number of targets or nodes can be very large.

Fibre Channel tape names take the form $2$MGAn, and medium changer names take the form $2$GGAn. The prefix is always $2$, and the controller letter is always A. The device mnemonic is MG for tapes and GG for medium changers. The unit number n is generated automatically by OpenVMS.

The name creation algorithm chooses the first free unit number, starting with zero. The first tape discovered on the Fibre Channel is named $2$MGA0, the next is named $2$MGA1, and so forth. Similarly, the first medium changer detected on the Fibre Channel is named $2$GGA0. The naming of tapes and medium changers on parallel SCSI buses remains the same.

Note the use of allocation class 2. Allocation class 1 is already used by devices whose name is keyed by a user-defined identifier (UDID), as with HSG Fibre Channel disks ($1$DGAnnnn) and HSG console command LUNs ($1$GGAnnnn).

An allocation class of 2 is used by devices whose names are obtained from the file SYS$DEVICES.DAT. The names are based on a worldwide identifier (WWID) key, as described in the following sections. Also note that, while GG is the same mnemonic used for both medium changers and HSG Command Console LUNs (CCLs), medium changers always have an allocation class of 2 and HSG CCLs an allocation class of 1.

Tape and medium changer names are automatically kept consistent within a single OpenVMS Cluster system. Once a tape device is named by any node in the cluster, all other nodes in the cluster automatically choose the same name for that device, even if this overrides the first free unit number algorithm. The chosen device name remains the same through all subsequent reboot operations in the cluster.

If multiple nonclustered Alpha systems exist on a SAN and need to access the same tape device on the Fibre Channel, then the upper-level application must provide consistent naming and synchronized access.
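
Configured Fibre Channel tape and medium changer devices can be listed with the DCL SHOW DEVICE command. The following is a sketch; the unit numbers shown depend on the order of discovery:


$ SHOW DEVICE MG    ! lists tape devices, for example $2$MGA0
$ SHOW DEVICE GG    ! lists medium changers ($2$GGAn) and HSG CCLs ($1$GGAnnnn)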

7.5.2.2 Use of Worldwide Identifiers (WWIDs)

For each Fibre Channel tape device name, OpenVMS must uniquely identify the physical device that is associated with that name.

In parallel SCSI, directly attached devices are uniquely identified by their physical path (port/target/LUN). Fibre Channel disks are uniquely identified by user-defined identifiers (UDIDs). These strategies are either unscalable or unavailable for Fibre Channel tapes and medium changers.

Therefore, the identifier for a given Fibre Channel tape or medium changer device is its worldwide identifier (WWID). The WWID resides in the device firmware and is required to be unique by the Fibre Channel standards.

WWIDs can take several forms, for example:

  • IEEE registered WWID (64-bit binary)
  • Vendor ID plus product ID plus serial number (ASCII)

The overall WWID consists of the WWID data prefixed by a binary WWID header, which is a longword describing the length and type of WWID data.

In general, if a device reports an IEEE WWID, OpenVMS chooses this as the unique identifying WWID for the device. If the device does not report such a WWID, then the ASCII WWID is used. If the device reports neither an IEEE WWID nor serial number information, then OpenVMS does not configure the device. During the device discovery process, OpenVMS rejects the device with the following message:


%SYSMAN-E-NOWWID, error for device Product-ID, no valid WWID found.

The WWID structures can be a mix of binary and ASCII data. These formats are displayable and are intended to be consistent with those defined by the console WWIDMGR utility. Refer to the WWIDMGR User's Manual for additional information. Note that if the data following the WWID header is pure ASCII data, it must be enclosed in double quotation marks.

The displayable format of a 64-bit IEEE WWID consists of an 8-digit hexadecimal number in ASCII (the WWID header), followed by a colon (:) and then the IEEE WWID data. For example:


0C000008:0800-4606-8010-CD3C

The displayable format of an ASCII WWID consists of an 8-digit WWID header, followed by a colon (:) and then the concatenation of the 8-byte vendor ID plus the 16-byte product ID plus the serial number. For example:


04100022:"COMPAQ  DLT8000         JF71209240"

Note

Occasionally, an ASCII WWID may contain nonprintable characters in the serial number. In a displayable format, such a character is represented by \nn, where nn is the 2-digit ASCII hexadecimal value of the character. For example, a null is represented by \00.

7.5.2.3 File-Based Device Naming

Fibre Channel tape and medium changer devices are configured according to information found in the SYS$SYSTEM:SYS$DEVICES.DAT file. This is an ASCII file consisting of two consecutive records per device, where the two records are in the following form:


[Device $2$devnam]
WWID = displayable_identifier

During autoconfiguration, the Fibre Channel is probed and the WWIDs are fetched for all devices. If the fetched WWID matches an entry in the memory-resident copy of the SYS$DEVICES.DAT file, then the device is configured using the device name that has been paired with that WWID.

Note

The SYS$DEVICES.DAT file is also used for port allocation class (PAC) information. In OpenVMS Alpha Version 7.3, Fibre Channel tape-naming becomes a second use of this same file, even though PACs and Fibre Channel tapes are not related, other than their common need to access file-based device information at boot time.

By default, the SYS$DEVICES.DAT file is created in the cluster common directory, SYS$COMMON:[SYSEXE].

As an example, the following portion of SYS$DEVICES.DAT causes the eventual configuration of devices named $2$MGA300 and $2$MGA23:


!
[Device $2$MGA300]
WWID = 04100022:"COMPAQ  DLT8000         JF71209240"
!
[Device $2$mga23]
WWID = 04100022:"DEC     TZ89     (C) DECJL01164302"

Although the file is typically read and written only by OpenVMS utilities, in rare instances you may need to edit the file. In OpenVMS Alpha Version 7.3, you can change only the unit number of the device, as described in Section 7.5.5. The internal syntax rules governing the file are summarized as follows:

  • Comment lines (beginning with !) and blank lines are permitted.
  • Any white space (or none) can separate [Device from the device name represented by $2$xxx].
  • Failure to supply the $2$ prefix will result in a console warning.

Similarly, on the line containing WWID = , any white space (or none) can appear on either side of the equals sign. All lines must be left-justified, and all lines must be less than 512 characters.

The parsing of this file is not case sensitive, with one important exception: all characters enclosed within double quotation marks are taken literally, so that characters such as spaces and lowercase letters are significant. In the case of ASCII data enclosed by double quotation marks, there must be no space between the colon and the double quotation mark.

Also, if more than one WWID = line follows a single [Device devnam] line, the last WWID = value takes precedence. Normally, however, there is exactly one WWID = line per [Device devnam] line.

Similarly, if two or more [Device devnam] lines specify the same device name but different WWIDs, only the last device name and WWID specified in the file is used.

This file is read at boot time, and it is also read from and written to by the SYSMAN IO FIND_WWID command. If there are additional system-specific copies of the SYS$DEVICES.DAT file, their tape naming records become automatically compatible as a result of running SYSMAN IO FIND_WWID on each system. The SYSMAN IO FIND_WWID command is described in more detail in the following section.
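
As a sketch, a minimal configuration sequence for newly attached Fibre Channel tape devices is as follows; IO FIND_WWID probes the Fibre Channel, assigns device names, and records the name-to-WWID pairings in SYS$DEVICES.DAT, and IO AUTOCONFIGURE then configures the named devices:


$ RUN SYS$SYSTEM:SYSMAN
SYSMAN> IO FIND_WWID
SYSMAN> IO AUTOCONFIGURE
SYSMAN> EXIT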

