OpenVMS Cluster Systems
3.4.4 Satellites

Satellites are computers without a local system disk. Generally, satellites are consumers of cluster resources, although they can also provide facilities for disk serving, tape serving, and batch processing. If satellites are equipped with local disks, they can enhance performance by using such local disks for paging and swapping.

Satellites are booted remotely from a boot server (or from a MOP server and a disk server) serving the system disk. Section 3.4.5 describes MOP and disk server functions during satellite booting.

Note: An Alpha system disk can be mounted as a data disk on a VAX computer and, with the proper MOP setup, can be used to boot Alpha satellites. Similarly, a VAX system disk can be mounted on an Alpha computer and, with the proper MOP setup, can be used to boot VAX satellites.
Reference: Cross-architecture booting is described in
Section 10.5.
When a satellite requests an operating system load, a MOP server for the appropriate OpenVMS Alpha or OpenVMS VAX operating system sends a bootstrap image to the satellite that allows the satellite to load the rest of the operating system from a disk server and join the cluster. The sequence of actions during booting is described in Table 3-1.
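For illustration only, the following is a minimal sketch of registering a satellite on a boot server, assuming that LANCP (rather than DECnet MOP) performs the downline load; the node name SATURN is hypothetical. The configuration procedure prompts for details such as the satellite's node name, LAN hardware address, and system root.

$ ! On the boot server, run the cluster configuration procedure and
$ ! choose the ADD option to register the satellite (LANCP-based MOP).
$ @SYS$MANAGER:CLUSTER_CONFIG_LAN.COM
$ !
$ ! Afterward, the satellite's entry in the LANCP permanent node
$ ! database can be checked (SATURN is a hypothetical node name):
$ MCR LANCP LIST NODE SATURN

With DECnet MOP, CLUSTER_CONFIG.COM serves the same role.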
3.4.6 Examples

Figure 3-3 shows an OpenVMS Cluster system based on a LAN interconnect with a single Alpha server node and a single Alpha system disk.

Note: To include VAX satellites in this configuration, configure a VAX system disk on the Alpha server node following the instructions in Section 10.5.

Figure 3-3 LAN OpenVMS Cluster System with Single Server Node and System Disk

In Figure 3-3, the server node (and its system disk) is a single point of failure. If the server node fails, the satellite nodes cannot access any of the shared disks, including the system disk. Note that some of the satellite nodes have locally connected disks. If you convert one or more of these into system disks, satellite nodes can boot from their own local system disks.

Figure 3-4 shows an example of an OpenVMS Cluster system that uses LAN and Fibre Channel interconnects.

Figure 3-4 LAN and Fibre Channel OpenVMS Cluster System: Sample Configuration

The LAN connects nodes A and B with nodes C and D into a single OpenVMS Cluster system. In Figure 3-4, Volume Shadowing for OpenVMS is used to maintain key data storage devices in identical states (shadow sets A and B). Any data written to the shadowed disks at one site is also written at the other site. However, the benefits of high data availability must be weighed against the performance overhead of using the MSCP server to serve the shadow set over the cluster interconnect.

Figure 3-5 illustrates how FDDI can be configured with Ethernet from the bridges to the server CPU nodes. This configuration can increase overall throughput. OpenVMS Cluster systems that have heavily utilized Ethernet segments can replace the Ethernet backbone with a faster LAN to alleviate the performance bottleneck caused by the Ethernet.

Figure 3-5 FDDI in Conjunction with Ethernet in an OpenVMS Cluster System
3.4.7 LAN Bridge Failover Process

The following table describes how the bridge parameter settings can affect the failover process.
Note: If you change a parameter on one LAN bridge, you
should change that parameter on all bridges to ensure that selection of
a new root bridge does not change the value of the parameter. The
actual parameter value the bridge uses is the value specified by the
root bridge.
MEMORY CHANNEL is a high-performance cluster interconnect technology
for PCI-based Alpha systems. With the benefits of very low latency,
high bandwidth, and direct memory access, MEMORY CHANNEL complements
and extends the ability of OpenVMS Clusters to work as a single,
virtual system. MEMORY CHANNEL is used for node-to-node cluster
communications only. You use it in combination with another
interconnect, such as Fibre Channel, SCSI, CI, or DSSI, that is
dedicated to storage traffic.
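As a quick check of which interconnects are actually carrying cluster (SCS) traffic, the SHOW CLUSTER utility can display circuit and connection information. This is a generic sketch, not specific to MEMORY CHANNEL; the local port names displayed (for example, a MEMORY CHANNEL or LAN port) depend on the hardware installed.

$ ! Start SHOW CLUSTER in continuous mode, then add the CIRCUITS and
$ ! CONNECTIONS classes to see the local port used for each virtual
$ ! circuit to every other cluster member.
$ SHOW CLUSTER/CONTINUOUS
ADD CIRCUITS
ADD CONNECTIONS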
A node requires the following three hardware components to support a MEMORY CHANNEL connection:
3.5.2 Examples

Figure 3-6 shows a two-node MEMORY CHANNEL cluster with shared access to Fibre Channel storage and a LAN interconnect for failover.

Figure 3-6 Two-Node MEMORY CHANNEL OpenVMS Cluster Configuration

A three-node MEMORY CHANNEL cluster connected by a MEMORY CHANNEL hub and also by a LAN interconnect is shown in Figure 3-7. The three nodes share access to the Fibre Channel storage. The LAN interconnect enables failover if the MEMORY CHANNEL interconnect fails.

Figure 3-7 Three-Node MEMORY CHANNEL OpenVMS Cluster Configuration

3.6 Multihost SCSI OpenVMS Cluster Systems

OpenVMS Cluster systems support the Small Computer Systems Interface (SCSI) as a storage interconnect. A SCSI interconnect, also called a SCSI bus, is an industry-standard interconnect that supports one or more computers, peripheral devices, and interconnecting components.
Beginning with OpenVMS Alpha Version 6.2, multiple Alpha computers can
simultaneously access SCSI disks over a SCSI interconnect. Another
interconnect, for example, a local area network, is required for
host-to-host OpenVMS cluster communications.
Beginning with OpenVMS Alpha Version 6.2-1H3, OpenVMS Alpha supports up to three nodes on a shared SCSI bus as the storage interconnect. A quorum disk can be used on the SCSI bus to improve the availability of two-node configurations. Host-based RAID (including host-based shadowing) and the MSCP server are supported for shared SCSI storage devices. With the introduction of the SCSI hub DWZZH-05, four nodes can be supported in a SCSI multihost OpenVMS Cluster system. In order to support four nodes, the hub's fair arbitration feature must be enabled.
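For example, a two-node shared-SCSI configuration might designate a disk on the shared bus as the quorum disk by setting system parameters on both nodes. The following is a sketch only; the device name $1$DKA100 and the vote counts are assumptions for illustration.

$ ! Hypothetical additions to each node's MODPARAMS.DAT:
$ !
$ !    DISK_QUORUM = "$1$DKA100"     ! quorum disk on the shared SCSI bus
$ !    QDSKVOTES = 1                 ! votes contributed by the quorum disk
$ !    EXPECTED_VOTES = 3            ! one vote per node plus the quorum disk
$ !
$ ! Then run AUTOGEN to apply the new parameter values:
$ @SYS$UPDATE:AUTOGEN GETDATA REBOOT NOFEEDBACK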
For a complete description of these configurations, see Guidelines for OpenVMS Cluster Configurations.
Figure 3-8 shows an OpenVMS Cluster configuration that uses a SCSI interconnect for shared access to SCSI devices. Note that another interconnect, a LAN in this example, is used for host-to-host communications.

Figure 3-8 Three-Node OpenVMS Cluster Configuration Using a Shared SCSI Interconnect

3.7 Multihost Fibre Channel OpenVMS Cluster Systems
OpenVMS Cluster systems support the Fibre Channel (FC) interconnect as a storage interconnect. Fibre Channel is an ANSI standard network and storage interconnect that offers many advantages over other interconnects, including high-speed transmission and long interconnect distances. A second interconnect is required for node-to-node communications.
OpenVMS Alpha supports the Fibre Channel SAN configurations described in the latest Compaq StorageWorks Heterogeneous Open SAN Design Reference Guide and in the Data Replication Manager (DRM) user documentation. This configuration support includes multiswitch Fibre Channel fabrics, up to 500 meters of multimode fiber, and up to 100 kilometers of single-mode fiber. In addition, DRM configurations provide long-distance intersite links (ISLs) through the use of the Open Systems Gateway and wave division multiplexors. OpenVMS supports sharing of the fabric and the HSG storage with non-OpenVMS systems.

OpenVMS provides support for the number of hosts, switches, and storage controllers specified in the StorageWorks documentation. In general, the number of hosts and storage controllers is limited only by the number of available fabric connections.

Host-based RAID (including host-based shadowing) and the MSCP server are supported for shared Fibre Channel storage devices. Multipath support is available for these configurations.
For a complete description of these configurations, see Guidelines for OpenVMS Cluster Configurations.
Figure 3-9 shows a multihost configuration with two independent Fibre Channel interconnects connecting the hosts to the storage subsystems. Note that another interconnect is used for node-to-node communications.

Figure 3-9 Four-Node OpenVMS Cluster Configuration Using a Fibre Channel Interconnect
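The following sketch shows how Fibre Channel storage typically appears to OpenVMS once configured; the device name $1$DGA100 is hypothetical, and the exact display depends on the OpenVMS version and hardware.

$ ! Configure any newly attached Fibre Channel devices (Alpha):
$ MCR SYSMAN IO AUTOCONFIGURE/LOG
$ !
$ ! Fibre Channel disks appear with names of the form $1$DGAnnnn:
$ SHOW DEVICE DG
$ !
$ ! For a multipath device, SHOW DEVICE/FULL lists the available paths:
$ SHOW DEVICE/FULL $1$DGA100: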
Chapter 4
Task | Section |
---|---|
Check all hardware connections to computer, interconnects, and devices. | Described in the appropriate hardware documentation. |
Verify that all microcode and hardware is set to the correct revision levels. | Contact your support representative. |
Install the OpenVMS operating system. | Section 4.2 |
Install all software licenses, including OpenVMS Cluster licenses. | Section 4.3 |
Install layered products. | Section 4.4 |
Configure and start LANCP or DECnet for satellite booting. | Section 4.5 |
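As a concrete illustration of the last task in the table, the following sketch enables the MOP downline-load service on a boot server's LAN circuit under DECnet for OpenVMS (Phase IV). The circuit name SVA-0 is an assumption; use SHOW KNOWN CIRCUITS in NCP to identify the circuit on your system. With LANCP instead of DECnet, CLUSTER_CONFIG_LAN.COM performs the equivalent setup.

$ ! Circuit name SVA-0 is an assumption for this sketch.
$ MCR NCP
NCP> SET CIRCUIT SVA-0 STATE OFF
NCP> SET CIRCUIT SVA-0 SERVICE ENABLED
NCP> SET CIRCUIT SVA-0 STATE ON
NCP> DEFINE CIRCUIT SVA-0 SERVICE ENABLED
NCP> EXIT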
Only one OpenVMS operating system version can exist on a system disk. Therefore, when installing or upgrading the OpenVMS operating systems:
A system disk is one of the few resources that cannot be shared between Alpha and VAX systems. However, an Alpha system disk can be mounted as a data disk on a VAX computer and, with MOP configured appropriately, can be used to boot Alpha satellites. Similarly, a VAX system disk can be mounted on an Alpha computer and, with the appropriate MOP configuration, can be used to boot VAX satellites.
Reference: Cross-architecture booting is described in Section 10.5.
Once booted, Alpha and VAX processors can share access to data on any disk in the OpenVMS Cluster, including system disks. For example, an Alpha system can mount a VAX system disk as a data disk and a VAX system can mount an Alpha system disk as a data disk.
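For example, a VAX node could make an Alpha system disk available cluster-wide as a data disk with the MOUNT command. The device name and volume label below are hypothetical.

$ ! Mount the Alpha system disk as a data disk on every cluster node:
$ MOUNT/CLUSTER/SYSTEM $1$DUA200: ALPHASYS
$ ! Its directory tree is then accessible like any other data disk:
$ DIRECTORY $1$DUA200:[VMS$COMMON.SYSEXE]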
Note: An OpenVMS Cluster running both implementations
of DECnet requires a system disk for DECnet for OpenVMS (Phase IV) and
another system disk for DECnet--Plus (Phase V). For more information,
see the DECnet--Plus documentation.
4.2.2 Where to Install
You may want to set up common system disks according to these guidelines:
IF you want the cluster to have... | THEN perform the installation or upgrade... |
---|---|
One common system disk for all computer members | Once on the cluster common system disk. |
A combination of one or more common system disks and one or more local (individual) system disks | Either: |
Note: If your cluster includes multiple common system disks, you must later coordinate system files to define the cluster operating environment, as described in Chapter 5.
Reference: See Section 8.5 for information about creating a duplicate system disk.
Example: If your OpenVMS Cluster consists of 10 computers, 4 of which boot from a common Alpha system disk, 2 of which boot from a second common Alpha system disk, 2 of which boot from a common VAX system disk, and 2 of which boot from their own local system disk, you need to perform an installation five times: once for each of the three common system disks and once for each of the two local system disks.
4.2.3 Information Required
Table 4-1 lists the questions that the OpenVMS operating system installation procedure prompts you with and describes how certain system parameters are affected by the responses you provide. Two of the prompts vary, depending on whether the node is running DECnet. The table also provides an example of an installation procedure that is taking place on a node named JUPITR.
Important: Be sure you determine answers to the questions before you begin the installation.
Note about versions: Refer to the appropriate OpenVMS Release Notes document for the required version numbers of hardware and firmware. When mixing versions of the operating system in an OpenVMS Cluster, check the release notes for information about compatibility.
Reference: Refer to the appropriate OpenVMS upgrade and installation manual for complete installation instructions.
Prompt | Response | Parameter |
---|---|---|
Will this node be a cluster member (Y/N)? | | VAXCLUSTER |
What is the node's DECnet node name? | If the node is running DECnet, this prompt, the following prompt, and the SCSSYSTEMID prompt are displayed. Enter the DECnet node name or the DECnet--Plus node synonym (for example, JUPITR). If a node synonym is not defined, SCSNODE can be any name from 1 to 6 alphanumeric characters in length. The name cannot include dollar signs ($) or underscores (_). | SCSNODE |
What is the node's DECnet node address? | Enter the DECnet node address (for example, a valid address might be 2.211). If an address has not been assigned, enter 0 now and enter a valid address when you start DECnet (discussed later in this chapter). For DECnet--Plus, this question is asked when nodes are configured with a Phase IV compatible address. If a Phase IV compatible address is not configured, then the SCSSYSTEMID system parameter can be set to any value. | SCSSYSTEMID |
What is the node's SCS node name? | If the node is not running DECnet, this prompt and the following prompt are displayed in place of the two previous prompts. Enter a name of 1 to 6 alphanumeric characters that uniquely names this node. At least 1 character must be a letter. The name cannot include dollar signs ($) or underscores (_). | SCSNODE |
What is the node's SCSSYSTEMID number? | This number must be unique within this cluster. SCSSYSTEMID is the low-order 32 bits of the 48-bit system identification number. If the node is running DECnet for OpenVMS, calculate the value from the DECnet address using the following formula: SCSSYSTEMID = (DECnet-area-number * 1024) + (DECnet-node-number). Example: If the DECnet address is 2.211, the value is (2 * 1024) + 211 = 2259. | SCSSYSTEMID |
Will the Ethernet be used for cluster communications (Y/N)? 1 | | NISCS_LOAD_PEA0 |
Enter this cluster's group number: | Enter a number in the range of 1 to 4095 or 61440 to 65535 (see Section 2.5). This value is stored in the CLUSTER_AUTHORIZE.DAT file in the SYS$COMMON:[SYSEXE] directory. | Not applicable |
Enter this cluster's password: | Enter the cluster password. The password must be from 1 to 31 alphanumeric characters in length and can include dollar signs ($) and underscores (_) (see Section 2.5). This value is stored in scrambled form in the CLUSTER_AUTHORIZE.DAT file in the SYS$COMMON:[SYSEXE] directory. | Not applicable |
Reenter this cluster's password for verification: | Reenter the password. | Not applicable |
Will JUPITR be a disk server (Y/N)? | | MSCP_LOAD |
Will JUPITR serve HSC or RF disks (Y/N)? | | MSCP_SERVE_ALL |
Enter a value for JUPITR's ALLOCLASS parameter: 3 | The value is dependent on the system configuration: | ALLOCLASS |
Does this cluster contain a quorum disk [N]? | Enter Y or N, depending on your configuration. If you enter Y, the procedure prompts for the name of the quorum disk. Enter the device name of the quorum disk. (Quorum disks are discussed in Chapter 2.) | DISK_QUORUM |
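After the installation, the values supplied at these prompts can be checked on the running system. The following sketch uses SYSGEN to display the SCS parameters and SYSMAN to display the cluster group number; the cluster password itself is never displayed.

$ ! Display the SCS identity recorded by the installation:
$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE CURRENT
SYSGEN> SHOW SCSNODE
SYSGEN> SHOW SCSSYSTEMID
SYSGEN> EXIT
$ ! Display the cluster group number stored in CLUSTER_AUTHORIZE.DAT:
$ RUN SYS$SYSTEM:SYSMAN
SYSMAN> CONFIGURATION SHOW CLUSTER_AUTHORIZATION
SYSMAN> EXIT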