Guidelines for OpenVMS Cluster Configurations



4.8.5 SCSI Interconnect Distances

The maximum length of the SCSI interconnect is determined by the signaling method used in the configuration and, for single-ended signaling, by the data transfer rate.

There are two types of electrical signaling for SCSI interconnects: single ended and differential. Both types can operate in standard mode, fast mode, or ultra mode. For differential signaling, the maximum SCSI cable length possible is the same for standard mode and fast mode.

Table 4-4 summarizes how the type of signaling method affects SCSI interconnect distances.

Table 4-4 Maximum SCSI Interconnect Distances
Signaling Technique   Rate of Data Transfer   Maximum Cable Length
Single ended          Standard                6 m (1)
Single ended          Fast                    3 m
Single ended          Ultra                   20.5 m (2)
Differential          Standard or Fast        25 m
Differential          Ultra                   25.5 m (3)


(1) The SCSI standard specifies a maximum length of 6 m for this interconnect. However, it is advisable, where possible, to limit the cable length to 4 m to ensure the highest level of data integrity.
(2) This length is attainable if devices are attached only at each end. If devices are spaced along the interconnect, they must be at least 1 m apart, and the interconnect cannot exceed 4 m.
(3) More than two devices can be supported.

4.8.6 Supported Adapters, Bus Types, and Computers

Table 4-5 shows SCSI adapters with the internal buses and computers they support.

Table 4-5 SCSI Adapters
Adapter                              Internal Bus   Supported Computers
Embedded (NCR-810 based)/KZPAA (1)   PCI            See the options specifications for your system.
KZPSA (2)                            PCI            Supported on all Alpha computers that support KZPSA in single-host configurations. (3)
KZTSA (2)                            TURBOchannel   DEC 3000
KZPBA-CB (4)                         PCI            Supported on all Alpha computers that support KZPBA in single-host configurations. (3)


(1) Single-ended.
(2) Fast-wide differential (FWD).
(3) See the system-specific hardware manual.
(4) Ultra differential. The ultra single-ended adapter (KZPBA-CA) does not support multihost systems.

Reference: For information about the SCSI adapters supported on each OpenVMS Integrity server or Alpha system, go to the OpenVMS web page at:

http://www.hp.com/go/openvms

Select Integrity server or Alpha system (from the left navigation panel under related links). Then select the system of interest and its QuickSpecs. The QuickSpecs for each system briefly describe all options, including the adapters, supported on that system.

Reference: For information about the SCSI adapters supported on each AlphaServer system, go to the OpenVMS web page at:

http://www.hp.com/go/openvms

Select AlphaSystems (from the left navigation panel under related links). Next, choose the AlphaServer system of interest and then its QuickSpecs. The QuickSpecs for each AlphaServer system briefly describe all options, including the adapters, supported on that system.

4.9 SAS Interconnect (Integrity servers Only)

SAS is a point-to-point architecture that transfers data to and from SCSI storage devices by using serial communication (one bit at a time). SAS uses SAS devices and differential signaling to achieve reliable, high-speed serial communication.

SAS combines high-end features from Fibre Channel (such as multi-initiator support and full-duplex communication) and the physical interface leveraged from SATA (for better compatibility and investment protection) with the performance, reliability, and ease of use of traditional SCSI technology.

SAS Devices

There are three types of SAS devices: initiators, targets, and expanders. An initiator device is a host bus adapter (HBA) or controller. The initiator is attached to one or more targets (SAS hard disk drives, SATA hard disk drives, and SAS tape drives) to form a SAS domain. Expanders are low-cost, high-speed switches that scale the number of targets attached to an initiator, thereby creating a larger SAS domain. Each SAS device has a unique worldwide name (SAS address) assigned at manufacturing to simplify its identification in a domain.

Differential signaling

All SAS devices have connection points called ports. One or more transceiver mechanisms, called phys, are located in the port of each SAS device. A physical link, consisting of two wire pairs, connects the transmitter of each phy in one device's port to the receiver of a phy in another device's port. The SAS interface allows multiple physical links to be combined into 2x, 3x, 4x, or 8x connections per port for scalable bandwidth. A port that has one phy is described as "narrow," while a port with two to four phys is described as "wide."

SAS uses differential signaling to transfer data over a physical link, which reduces the effects of capacitance, inductance, and noise experienced by parallel SCSI at higher speeds. SAS communication is full duplex, which means that each phy can send and receive information simultaneously over the two wire pairs.
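
The relationships among these terms can be pictured with a minimal sketch in Python (illustrative only; the class names and rates are assumptions for the example, not part of any SAS specification):

    # Minimal sketch of SAS port terminology; names and rates are illustrative.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Phy:
        link_rate_gbps: float      # signaling rate of one physical link, e.g. 3.0 or 6.0

    @dataclass
    class Port:
        phys: List[Phy] = field(default_factory=list)

        @property
        def kind(self) -> str:
            # A port with one phy is narrow; combining phys makes a wide port.
            return "narrow" if len(self.phys) == 1 else f"wide ({len(self.phys)}x)"

    wide_port = Port(phys=[Phy(3.0) for _ in range(4)])   # a 4x wide port
    print(wide_port.kind)                                 # wide (4x)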

4.9.1 Advantages

A multi-node cluster using SAS provides an alternative to clustered Fibre Channel local loop topologies. This highly scalable SAS architecture enables topologies that provide high performance and high availability with no single point of failure.

SAS solutions accommodate both low-cost bulk storage (SATA) and the performance and reliability required by mission-critical applications (SAS), allowing for maximum configuration flexibility and simplicity.

4.9.2 Throughput

SAS is designed to work at speeds greater than parallel SCSI, that is, greater than 320 MB/s. Table 4-6 shows the throughput for the SAS interconnects. For SCSI interconnect throughput, see Table 4-3.

Table 4-6 Maximum Data Transfer Rates in Megabytes per Second
Mode                           Maximum Data Transfer Rate
SAS 1 (also called 3Gig SAS)   300 MB/s (3 Gb/s)
SAS 2 (also called 6Gig SAS)   600 MB/s (6 Gb/s)
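
The Gb/s and MB/s figures in Table 4-6 are related by the 8b/10b encoding used on SAS-1 and SAS-2 links, in which each data byte is carried as 10 bits on the wire. The following sketch, with an assumed 4x wide-port example, shows the arithmetic:

    # SAS-1 and SAS-2 links use 8b/10b encoding, so each data byte occupies
    # 10 bits on the wire.
    def link_throughput_mb_per_s(line_rate_gbps: float) -> float:
        bits_per_second = line_rate_gbps * 1_000_000_000
        return bits_per_second / 10 / 1_000_000      # 10 encoded bits per data byte

    print(link_throughput_mb_per_s(3.0))             # 300.0 MB/s per phy (SAS 1)
    print(link_throughput_mb_per_s(6.0))             # 600.0 MB/s per phy (SAS 2)
    print(4 * link_throughput_mb_per_s(3.0))         # 1200.0 MB/s for an assumed 4x wide port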

4.9.3 Supported Adapters, Bus Types, and Computers

Table 4-7 shows SAS adapters with the internal buses and computers they support.

Table 4-7 SAS Adapters
Adapter                      Internal Bus   Supported Computers
8p SAS HBA                   PCI-X          Core I/O on Integrity servers rx3600, rx6600
HP SC44Ge Host Bus Adapter   PCIe           Supported on Integrity servers with a PCIe backplane (rx2660, rx3600, rx6600)

4.10 LAN Interconnects

Ethernet and its variants (Fast Ethernet, Gigabit Ethernet, and 10 Gigabit Ethernet) are LAN-based interconnects.

These interconnects provide the following features:

The LANs that are supported as OpenVMS Cluster interconnects on each OpenVMS platform (Integrity servers and Alpha) are shown in Table 4-8.

Table 4-8 LAN Interconnect Support for OpenVMS Clusters
LAN Type              Platform
Ethernet              Integrity servers, Alpha
Fast Ethernet         Integrity servers, Alpha
Gigabit Ethernet      Integrity servers, Alpha
10 Gigabit Ethernet   Integrity servers only

Following the discussion of multiple LAN adapters, information specific to the supported LAN interconnect, Ethernet, is provided.

4.10.1 Multiple LAN Adapters

Multiple LAN adapters are supported. The adapters can be for different LAN types or for different adapter models for the same LAN type.

Multiple LAN adapters can be used to provide the following:

4.10.1.1 Multiple LAN Path Load Distribution

When multiple node-to-node LAN paths are available, the OpenVMS Cluster software chooses the set of paths to use based on the following criteria, which are evaluated in strict precedence order:

  1. Recent history of packet loss on the path
    Paths that have recently been losing packets at a high rate are termed lossy and will be excluded from consideration. Channels that have an acceptable loss history are termed tight and will be further considered for use.
  2. Priority
    Management priority values can be assigned to both individual LAN paths and to local LAN devices. A LAN path's priority value is the sum of these priorities. Only tight LAN paths with a priority value equal to, or one less than, the highest priority value of any tight path will be further considered for use.
  3. Maximum packet size
    Tight, equivalent-priority channels whose maximum packet size is equivalent to that of the largest packet size of any tight equivalent-priority channel will be further considered for use.
  4. Equivalent latency
    LAN paths that meet the preceding criteria will be used if their latencies (computed network delay) are closely matched to that of the fastest such channel. The delay of each LAN path is measured using cluster communications traffic on that path. If a LAN path is excluded from cluster communications use because it does not meet the preceding criteria, its delay will be measured at intervals of a few seconds to determine if its delay, or packet loss rate, has improved enough so that it then meets the preceding criteria.

Packet transmissions are distributed in round-robin fashion across all communication paths between local and remote adapters that meet the preceding criteria.
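
A minimal sketch of these selection rules in Python (the thresholds, data structures, and transmit helper are illustrative assumptions; the actual selection is performed inside PEDRIVER, not in application code):

    from dataclasses import dataclass
    from itertools import cycle

    @dataclass
    class LanPath:
        name: str
        packet_loss_rate: float   # recent fraction of packets lost
        priority: int             # sum of path priority and local device priority
        max_packet_size: int      # bytes
        latency_us: float         # measured network delay

    def usable_paths(paths, loss_threshold=0.05, latency_slack=1.2):
        # 1. Exclude lossy paths; the remainder are "tight".
        tight = [p for p in paths if p.packet_loss_rate <= loss_threshold]
        if not tight:
            return []
        # 2. Keep only paths whose priority equals, or is one less than, the highest.
        top = max(p.priority for p in tight)
        tight = [p for p in tight if p.priority >= top - 1]
        # 3. Keep only paths whose maximum packet size matches the largest remaining.
        biggest = max(p.max_packet_size for p in tight)
        tight = [p for p in tight if p.max_packet_size == biggest]
        # 4. Keep only paths whose latency is closely matched to the fastest remaining.
        fastest = min(p.latency_us for p in tight)
        return [p for p in tight if p.latency_us <= fastest * latency_slack]

    def transmit(packets, paths):
        # Packets are distributed round-robin across the selected paths.
        for packet, path in zip(packets, cycle(usable_paths(paths))):
            send_on(path, packet)   # hypothetical transmit helper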

4.10.1.2 Increased LAN Path Availability

Because LANs are ideal for spanning great distances, you may want to supplement an intersite link's throughput with high availability. You can do this by configuring critical nodes with multiple LAN adapters, each connected to a different intersite LAN link.

A common cause of intersite link failure is mechanical destruction of the intersite link. This can be avoided by path diversity, that is, physically separating the paths of the multiple intersite links. Path diversity helps to ensure that the configuration is unlikely to be affected by disasters affecting an intersite link.

4.10.2 Configuration Guidelines for LAN-Based Clusters

The following guidelines apply to all LAN-based OpenVMS Cluster systems:

4.10.3 Ethernet (10/100) and Gigabit (1/10) Ethernet Advantages

The Ethernet (10/100) interconnect is typically the lowest cost of all OpenVMS Cluster interconnects.

Gigabit (1/10) Ethernet interconnects offer the following advantages in addition to the advantages listed in Section 4.10:

4.10.4 Ethernet (10/100) and Gigabit (1/10) Ethernet Throughput

The Ethernet technology offers a range of baseband transmission speeds:

  10 Mb/s (Ethernet)
  100 Mb/s (Fast Ethernet)
  1 Gb/s (Gigabit Ethernet)
  10 Gb/s (10 Gigabit Ethernet)

Ethernet adapters do not provide hardware assistance, so processor overhead is higher than for CI or DSSI.

Consider the capacity of the total network design when you configure an OpenVMS Cluster system with many Ethernet-connected nodes or when the Ethernet also supports a large number of PCs or printers. General network traffic on an Ethernet can reduce the throughput available for OpenVMS Cluster communication. Fast Ethernet and Gigabit Ethernet can significantly improve throughput. Multiple Ethernet adapters can be used to improve cluster performance by offloading general network traffic.

Reference: For LAN configuration guidelines, see Section 4.10.2.

4.10.5 Configuration Guidelines for 10 Gigabit Ethernet Clusters

Use the following guidelines when configuring systems in a 10 Gigabit Ethernet cluster:

4.11 Cluster over IP

An OpenVMS Cluster can also use the Internet Protocol (IP) for cluster communication. The basic cluster rule is that all nodes in a cluster must be able to communicate with every other node in the cluster by means of direct communication. The nodes in a cluster can be located in the same LAN in a data center, or they can be geographically distributed.

If the nodes are within the same LAN, LAN is preferred for cluster communication. When nodes are located in multiple sites or multiple LANs, IP is preferred for cluster communication.

Where a Layer 2 service between two sites is not available, or where such a service is expensive, cluster communication between the sites can use cluster over IP.

Note

It is also possible to create an extended LAN or VLAN between two sites and use LAN for cluster communication between the nodes in two different sites.

The cluster protocol (SCS, also known as SCA, System Communication Architecture) over a LAN is provided by the port emulator driver (PEDRIVER). PEDRIVER also provides SCS communication using TCP/IP, in addition to the LAN, for cluster communication, as shown in Figure 1-1. PEDRIVER uses UDP to transport SCS packets between nodes.
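
Conceptually, each SCS packet travels as the payload of a UDP datagram. The following sketch illustrates only that layering; the port number and transmit helper are hypothetical, and PEDRIVER performs this work inside the operating system rather than in application code:

    # Conceptual sketch only: an SCS packet carried as a UDP datagram payload over IP.
    import socket

    CLUSTER_UDP_PORT = 12345   # hypothetical example port, not the configured PEDRIVER port

    def send_scs_packet(scs_payload: bytes, remote_ip: str) -> None:
        # UDP supplies datagram framing and IP supplies routing between sites;
        # sequencing and retransmission of SCS traffic remain PEDRIVER's responsibility.
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.sendto(scs_payload, (remote_ip, CLUSTER_UDP_PORT))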

Cluster over IP provides the following features:

4.11.1 Configuration Guidelines

The following guidelines apply to all IP based OpenVMS Clusters:

4.11.2 IP Availability

Logical LAN failover can be used and configured with IP addresses for cluster communication. This logical LAN failover feature provides high availability in the event of a link failure. For more information about logical LAN failover, see Section 8.7.

4.11.3 IP Advantages

Cluster over IP provides the following advantages:

4.11.4 IP Performance

A key challenge is to achieve comparable performance levels when using IP for cluster traffic. For a long-distance cluster, the speed-of-light delay between geographically distant sites quickly becomes the dominant factor in latency, overshadowing any delays associated with traversing the IP stacks within the cluster member hosts. There may be a tradeoff between failover latency and steady-state performance. Localizing cluster traffic in the normal (non-failover) case is vital to optimizing system performance as the distance between sites is stretched to supported limits.
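
As a rough illustration of why propagation delay dominates, the following sketch estimates the round-trip delay for an assumed intersite distance; the signal speed approximates light in optical fiber and the distance is only an example, not a configuration limit:

    # Illustrative estimate of intersite round-trip propagation delay.
    # 200,000 km/s approximates the speed of light in optical fiber.
    def round_trip_delay_ms(distance_km: float, signal_speed_km_per_s: float = 200_000.0) -> float:
        one_way_seconds = distance_km / signal_speed_km_per_s
        return 2 * one_way_seconds * 1000.0

    print(round_trip_delay_ms(800))   # 8.0 ms of round-trip delay for sites 800 km apart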

