This chapter provides information to help you select systems for your
OpenVMS Cluster to satisfy your business and application requirements.
3.1 Integrity servers and Alpha Systems
An OpenVMS Cluster can include systems running OpenVMS Integrity servers or a combination of systems running OpenVMS Integrity servers and OpenVMS Alpha. See the OpenVMS Software Product Description for a list of the currently supported models.
HP Integrity server systems span a range of computing environments.
Your choice of systems depends on your business, your application
needs, and your budget. With a high-level understanding of systems and
their characteristics, you can make better choices. See the Software
Product Description or visit
http://www.hp.com/go/openvms
for the complete list of supported Integrity server systems.
3.4 Availability Considerations
An OpenVMS Cluster system is a highly integrated environment in which multiple systems share access to resources. This resource sharing increases the availability of services and data. OpenVMS Cluster systems also offer failover mechanisms that are transparent and automatic, and require little intervention by the system manager or the user.
Reference: See Chapter 8 for more information about
these failover mechanisms and about availability.
3.5 System Specifications
The HP web site provides ordering and configuring information for workstations and servers. It also contains detailed information about storage devices, printers, and network application support.
To access the HP web site, visit:
An interconnect is a physical path that connects computers to other computers, and to storage subsystems. OpenVMS Cluster systems support a variety of interconnects (also referred to as buses) so that members can communicate with each other and with storage, using the most appropriate and effective method available.
The software that enables OpenVMS Cluster systems to communicate over an interconnect is the System Communications Services (SCS). An interconnect that supports node-to-node SCS communications is called a cluster interconnect. An interconnect that provides node-to-storage connectivity within a cluster is called a shared-storage interconnect.
OpenVMS supports the following types of interconnects:
LAN (Ethernet, Fast Ethernet, Gigabit Ethernet, and 10 Gigabit Ethernet)
Internet Protocol (IP) over LAN (Cluster over IP)
MEMORY CHANNEL (node-to-node communication, Alpha only)
Fibre Channel (shared storage only)
SCSI (shared storage only)
SAS (shared storage only)
Cluster over IP is supported on Ethernet, Fast Ethernet, Gigabit Ethernet, and 10 Gigabit Ethernet.
The CI, DSSI, and FDDI interconnects are supported on Alpha and VAX systems. Memory Channel and ATM interconnects are supported only on Alpha systems. For documentation related to these interconnects, see the previous version of the manual.
The interconnects described in this chapter share some general characteristics. Table 4-1 describes these characteristics.
Characteristic | Description |
---|---|
Throughput | The quantity of data transferred across the interconnect. Some interconnects require more processor overhead than others. For example, Ethernet and FDDI interconnects require more processor overhead than do CI or DSSI. Larger packet sizes allow higher data-transfer rates (throughput) than do smaller packet sizes. |
Cable length | Interconnects range in length from 3 m to 40 km. |
Maximum number of nodes | The number of nodes that can connect to an interconnect varies among interconnect types. Be sure to consider this when configuring your OpenVMS Cluster system. |
Supported systems and storage | Each OpenVMS Cluster node and storage subsystem requires an adapter to connect the internal system bus to the interconnect. First consider the storage and processor I/O performance, then the adapter performance, when choosing an interconnect type. |
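The Throughput entry in Table 4-1 notes that larger packets and lower per-packet processor overhead both raise effective throughput. The following minimal sketch models that trade-off; the link rate, packet sizes, and per-packet CPU costs are illustrative assumptions, not figures for any particular interconnect.

```python
# Illustrative model: each packet costs its wire time plus a fixed
# per-packet CPU overhead, so larger packets and lighter protocol
# processing both raise effective throughput. All numbers are made up.
def effective_throughput_mb_s(link_mb_s, packet_bytes, cpu_overhead_us):
    wire_time_us = packet_bytes / link_mb_s      # 1 MB/s == 1 byte per microsecond
    return packet_bytes / (wire_time_us + cpu_overhead_us)

print(effective_throughput_mb_s(100.0, 1500, 20.0))   # ~42.9 MB/s
print(effective_throughput_mb_s(100.0, 9000, 20.0))   # ~81.8 MB/s (larger packets)
print(effective_throughput_mb_s(100.0, 9000, 5.0))    # ~94.7 MB/s (less CPU overhead)
```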
4.2 Comparison of Interconnect Types
Table 4-2 shows key statistics for a variety of interconnects.
Interconnect | Maximum Throughput (Mb/s) | Hardware-Assisted Data Link¹ | Storage Connection | Topology | Maximum Nodes per Cluster | Maximum Length |
---|---|---|---|---|---|---|
General-purpose | | | | | | |
Ethernet (Fast, Gigabit, 10 Gigabit) | 10/100/1000 | No | MSCP served | Linear or radial to a hub or switch | 96² | 100 m⁴/100 m⁴/550 m³ |
Shared-storage only | | | | | | |
Fibre Channel | 1000 | No | Direct⁵ | Radial to a switch | 96² | 10 km⁶/100 km⁷ |
SCSI | 160 | No | Direct⁵ | Bus or radial to a hub | 8-12⁸ | 25 m |
SAS | 6000 | No | Direct | Point to point or radial to a switch | 96² | 6 m |
You can use multiple interconnects to achieve the following benefits:
You can use two or more different types of interconnects in an OpenVMS Cluster system. You can use different types of interconnects to combine the advantages of each type and to expand your OpenVMS Cluster system.
If any one node in a cluster requires IP for cluster communication, all the other members in the cluster must be enabled for IP cluster communication.
For the latest information on supported interconnects, see the most recent OpenVMS Cluster Systems SPD.
Reference: For detailed information about the interconnects and adapters supported on each Integrity server system and AlphaServer system, visit the OpenVMS web page at:
http://www.hp.com/go/openvms
Select HP Integrity servers (from the left navigation panel under related links). Then select the Integrity system of interest and its QuickSpecs. The QuickSpecs for each system briefly describe all options, including the adapters, supported on that system.
Select HP AlphaSystems (from the left navigation panel under related
links). Then select the AlphaServer system of interest and its
QuickSpecs. The QuickSpecs for each AlphaServer system briefly describe
all options, including the adapters, supported on that system.
4.6 Fibre Channel Interconnect
Fibre Channel is a high-performance ANSI standard network and storage
interconnect for PCI-based Alpha systems. It is a full-duplex serial
interconnect and can simultaneously transmit and receive over 100
megabytes per second. Fibre Channel supports simultaneous access of
SCSI storage by multiple nodes connected to a Fibre Channel switch. A
second type of interconnect is needed for node-to-node communications.
4.6.1 Advantages
The Fibre Channel interconnect offers the following advantages:
4.6.2 Throughput
The Fibre Channel interconnect transmits at up to 2 Gb/s, 4 Gb/s, or 8 Gb/s, depending on the adapter. It is a full-duplex serial interconnect that can simultaneously transmit and receive at more than 100 MB/s.
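The gigabit line rates and the "more than 100 MB/s" figure are consistent: 1, 2, 4, and 8 Gb/s Fibre Channel use 8b/10b line encoding, so roughly 8/10 of the line rate is available for data. A minimal sketch of that arithmetic (frame and protocol overhead, which reduce the result slightly, are ignored here):

```python
# Usable Fibre Channel bandwidth per direction, assuming 8b/10b encoding
# (10 line bits carry 8 data bits); frame/protocol overhead is ignored.
LINE_RATE_GBAUD = {"1 Gb/s FC": 1.0625, "2 Gb/s FC": 2.125,
                   "4 Gb/s FC": 4.25, "8 Gb/s FC": 8.5}

def usable_mb_per_s(gbaud):
    data_bits_per_s = gbaud * 1e9 * 8 / 10   # remove 8b/10b coding overhead
    return data_bits_per_s / 8 / 1e6         # bits -> bytes -> MB/s

for name, rate in LINE_RATE_GBAUD.items():
    print(f"{name}: ~{usable_mb_per_s(rate):.0f} MB/s per direction")
# 1 Gb/s FC: ~106 MB/s; full duplex doubles the aggregate figure.
```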
4.7 MEMORY CHANNEL Interconnect (Alpha Only)
MEMORY CHANNEL is a high-performance cluster interconnect technology for PCI-based Alpha systems. With the benefits of very low latency, high bandwidth, and direct memory access, MEMORY CHANNEL complements and extends the unique ability of OpenVMS Clusters to work as a single, virtual system.
Three hardware components are required by a node to support a MEMORY CHANNEL connection:
A PCI-to-MEMORY CHANNEL adapter
A link cable
A port in a MEMORY CHANNEL hub (except in a two-node configuration, in which the cable connects the two PCI adapters directly)
A MEMORY CHANNEL hub is a PC-sized unit that provides a connection among systems. MEMORY CHANNEL can support up to four Alpha nodes per hub. You can configure systems with two MEMORY CHANNEL adapters in order to provide failover in case an adapter fails. Each adapter must be connected to a different hub.
A MEMORY CHANNEL hub is not required in clusters that comprise only two
nodes. In a two-node configuration, one PCI adapter is configured,
using module jumpers, as a virtual hub.
4.7.1 Advantages
MEMORY CHANNEL technology provides the following features:
4.7.2 Throughput
The MEMORY CHANNEL interconnect has a very high maximum throughput of 100 MB/s. If a single MEMORY CHANNEL is not sufficient, up to two interconnects (and two MEMORY CHANNEL hubs) can share throughput.
4.7.3 Supported Adapter
The MEMORY CHANNEL adapter connects to the PCI bus. The MEMORY CHANNEL adapter, CCMAA-BA, provides improved performance over the earlier adapter.
Reference: For information about the CCMAA-BA adapter support on AlphaServer systems, go to the OpenVMS web page at:
http://www.hp.com/go/openvms
Select AlphaSystems (from the left navigation panel under related
links). Next, select the AlphaServer system of interest and then its
QuickSpecs. The QuickSpecs for each AlphaServer system briefly describe
all options, including the adapters, supported on that system.
4.8 SCSI Interconnect
The SCSI interconnect is an industry-standard interconnect that supports one or more computers, peripheral devices, and interconnecting components. SCSI is a single-path, daisy-chained, multidrop bus. It is a single 8-bit or 16-bit data path with byte parity for error detection. Both single-ended signaling, which is inexpensive, and differential signaling, which supports longer distances, are available.
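The byte parity mentioned above is a simple per-byte check: the parallel SCSI bus adds a parity line that is driven so the data byte plus parity bit always contain an odd number of ones. A minimal, purely illustrative sketch of that check:

```python
# Per-byte odd parity as used on the parallel SCSI data path: the parity
# bit is chosen so the nine bits together carry an odd number of ones.
def odd_parity_bit(data_byte):
    ones = bin(data_byte & 0xFF).count("1")
    return 0 if ones % 2 == 1 else 1   # parity bit makes the total odd

# The receiver re-checks: total number of ones must be odd.
assert (bin(0b00000111).count("1") + odd_parity_bit(0b00000111)) % 2 == 1
assert (bin(0b00000000).count("1") + odd_parity_bit(0b00000000)) % 2 == 1
```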
In an OpenVMS Cluster, multiple computers on a single SCSI interconnect can simultaneously access SCSI disks. This type of configuration is called multihost SCSI connectivity or shared SCSI storage and is restricted to certain adapters and limited configurations. A second type of interconnect is required for node-to-node communication.
Shared SCSI storage in an OpenVMS Cluster system enables computers
connected to a single SCSI bus to share access to SCSI storage devices
directly. This capability makes it possible to build highly available
servers using shared access to SCSI storage.
4.8.1 OpenVMS Alpha Configurations
For multihost access to SCSI storage, the following components are required:
For larger configurations, the following components are available:
This support is restricted to certain adapters. OpenVMS does not provide this support for the newest SCSI adapters, including the Ultra SCSI adapters KZPEA, KZPDC, A6828A, A6829A, and A7173A.
Reference: For a detailed description of how to
connect OpenVMS Alpha SCSI configurations, see Appendix A.
4.8.2 OpenVMS Integrity servers Two-Node Shared SCSI Configuration
Shared SCSI storage support for two-node OpenVMS Integrity servers Cluster systems was introduced in OpenVMS Version 8.2-1. Prior to this release, shared SCSI storage was supported on OpenVMS Alpha systems only, using an earlier SCSI host bus adapter (HBA).
Shared SCSI storage in an OpenVMS Integrity servers Cluster system is subject to the following restrictions:
Figure 4-1 illustrates a two-node shared SCSI configuration. Note that a second interconnect, a LAN, is required for host-to-host OpenVMS Cluster communications. (OpenVMS Cluster communications are also known as System Communications Architecture (SCA) communications.)
Note that SCSI IDs 6 and 7 are required in this configuration. One of the systems must have a SCSI ID of 6 for each A7173A adapter port connected to a shared SCSI bus, instead of the factory-set default of 7. You use the U320_SCSI pscsi.efi utility, included on the IPF Offline Diagnostics and Utilities CD, to change the SCSI ID. The procedure for doing this is documented in the HP A7173A PCI-X Dual Channel Ultra320 SCSI Host Bus Adapter Installation Guide at:
http://docs.hp.com/en/netcom.html
Figure 4-1 Two-Node OpenVMS Integrity servers Cluster System
The SCSI interconnect offers the following advantages:
Table 4-3 shows the maximum data transfer rates (throughput), in MB/s, for the SCSI interconnect.
Mode | Narrow (8-Bit) | Wide (16-Bit) |
---|---|---|
Standard | 5 | 10 |
Fast | 10 | 20 |
Ultra | 20 | 40 |
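These figures follow directly from the bus clock and bus width: each transfer moves one bus-width of data, at a nominal 5, 10, or 20 million transfers per second for Standard, Fast, and Ultra SCSI respectively. A small sketch that reproduces the table under those assumptions:

```python
# Reproduces the Table 4-3 figures from nominal transfer rate and bus width.
TRANSFER_RATE_MT_S = {"Standard": 5, "Fast": 10, "Ultra": 20}   # mega-transfers per second
BUS_WIDTH_BYTES = {"Narrow (8-bit)": 1, "Wide (16-bit)": 2}

for mode, rate in TRANSFER_RATE_MT_S.items():
    cells = {width: rate * nbytes for width, nbytes in BUS_WIDTH_BYTES.items()}
    print(mode, cells, "MB/s")   # e.g. Ultra {'Narrow (8-bit)': 20, 'Wide (16-bit)': 40} MB/s
```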