
Guidelines for OpenVMS Cluster Configurations



10.3 Scalability in CI OpenVMS Clusters

Each CI star coupler can have up to 32 nodes attached; 16 can be systems and the rest can be storage controllers and storage. Figure 10-2, Figure 10-3, and Figure 10-4 show a progression from a two-node CI OpenVMS Cluster to a seven-node CI OpenVMS Cluster.

10.3.1 Two-Node CI OpenVMS Cluster

In Figure 10-2, two nodes have shared, direct access to storage that includes a quorum disk. The VAX and Alpha systems each have their own system disks.
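
Because the quorum disk contributes a vote, either system can continue running if the other fails. The following Python sketch illustrates the standard OpenVMS quorum arithmetic, quorum = (EXPECTED_VOTES + 2) / 2 with integer division; the node names and the one-vote-each settings are illustrative assumptions, not values taken from the figure.

  # Minimal sketch of OpenVMS cluster quorum arithmetic for a two-node
  # cluster with a quorum disk. Node names and vote values are illustrative.

  votes = {"ALPHA1": 1, "VAX1": 1, "QUORUM_DISK": 1}    # VOTES / QDSKVOTES settings

  expected_votes = sum(votes.values())                  # EXPECTED_VOTES = 3
  quorum = (expected_votes + 2) // 2                    # standard formula -> 2

  # If one node fails, the survivor plus the quorum disk still hold 2 votes.
  surviving_votes = votes["ALPHA1"] + votes["QUORUM_DISK"]
  if surviving_votes >= quorum:
      print("Cluster retains quorum and continues running")
  else:
      print("Cluster activity suspends until quorum is restored")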

Figure 10-2 Two-Node CI OpenVMS Cluster


The advantages and disadvantages of the configuration shown in Figure 10-2 include:

Advantages

  • All nodes have shared, direct access to all storage.
  • As the nodes and storage in this configuration grow, all nodes can still have shared, direct access to storage.
  • The MSCP server is enabled for failover to the LAN interconnect in case the CI fails. Enabling the MSCP server also allows you to add satellites.
  • This configuration has the lowest cost of the CI configurations shown in this section.

Disadvantage

  • The single HSJ/HSC is a potential bottleneck and single point of failure.

An increased need for more storage or processing resources could lead to an OpenVMS Cluster configuration like the one shown in Figure 10-3.

10.3.2 Three-Node CI OpenVMS Cluster

In Figure 10-3, three nodes are connected to two HSC controllers by the CI interconnects. The critical system disk is dual ported and shadowed.

Figure 10-3 Three-Node CI OpenVMS Cluster


The advantages and disadvantages of the configuration shown in Figure 10-3 include:

Advantages

  • All nodes have shared, direct access to all storage.
  • As the nodes and storage in this configuration grow, all nodes can still have shared, direct access to storage.
  • The MSCP server is enabled for failover to the LAN interconnect in case the CI fails. Enabling the MSCP server also allows you to add satellites.
  • Volume shadowed, dual-ported disks increase data availability.

Disadvantage

  • The HSJs/HSCs are potential bottlenecks.

If the I/O activity exceeds the capacity of the CI interconnect, this could lead to an OpenVMS Cluster configuration like the one shown in Figure 10-4.

10.3.3 Seven-Node CI OpenVMS Cluster

In Figure 10-4, seven nodes each have a direct connection to two star couplers and to all storage.

Figure 10-4 Seven-Node CI OpenVMS Cluster


The advantages and disadvantages of the configuration shown in Figure 10-4 include:

Advantages

  • All nodes have shared, direct access to all storage.
  • This configuration has more than double the storage, processing, and CI interconnect capacity of a configuration like the one shown in Figure 10-3.
  • Two CI interconnects between processors and storage provide twice the communication performance of one path.
  • Volume shadowed, dual-ported disks increase data availability.

Disadvantage

  • This configuration is complex and requires experienced personnel to configure, tune, and manage it properly.

10.3.4 Guidelines for CI OpenVMS Clusters

The following guidelines can help you configure your CI OpenVMS Cluster:

  • Every system should have shared, direct access to all storage.
  • In a CI OpenVMS Cluster larger than four nodes, use a second system disk for increased system performance.
    Reference: For more information on system disks, see Section 11.2.
  • Enable your systems, interconnects, and storage to work to their full capacity by eliminating bottlenecks. If any one of these components cannot handle the I/O load, none of the other components can work at its best. Ensure that the sum of all the I/O on your nodes is less than or equal to both your CI capacity and your storage capacity. When calculating the sum of the I/O on your nodes, add an extra 5 to 10% for lock manager internode communications.
    In general, use the following rules of thumb (the sketch after this list illustrates them):
    • The sum of all the I/O on your nodes, plus internode communications, should be less than or equal to your total CI capacity.
    • Your total CI capacity should be less than or equal to your total storage capacity.
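
A minimal Python sketch of this rule of thumb follows; all of the throughput numbers are hypothetical placeholders, not measured figures for any particular CI, HSJ/HSC, or node.

  # Rule-of-thumb check: node I/O plus lock manager overhead should fit
  # within total CI capacity, which in turn should fit within total
  # storage capacity. All throughput numbers are hypothetical placeholders.

  node_io = [800, 650, 700]              # I/O per second generated by each node
  lock_overhead = 0.10                   # 5-10% extra for lock manager traffic
  ci_capacity = [1500, 1500]             # capacity of each CI interconnect
  storage_capacity = [1200, 1200, 1000]  # capacity of each controller/storage path

  total_node_io = sum(node_io) * (1 + lock_overhead)
  total_ci = sum(ci_capacity)
  total_storage = sum(storage_capacity)

  if total_node_io <= total_ci <= total_storage:
      print("Balanced: %.0f <= %d <= %d" % (total_node_io, total_ci, total_storage))
  else:
      print("Potential bottleneck: node I/O %.0f, CI %d, storage %d"
            % (total_node_io, total_ci, total_storage))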

10.3.5 Guidelines for Volume Shadowing in CI OpenVMS Clusters

Volume shadowing is intended to enhance availability, not performance. However, the following volume shadowing strategies enable you to utilize availability features while also maximizing I/O capacity. These examples show CI configurations, but they apply to DSSI and SCSI configurations, as well.

Figure 10-5 Volume Shadowing on a Single Controller


Figure 10-5 shows two nodes connected to an HSJ, with a two-member shadow set.

The disadvantage of this strategy is that the controller is a single point of failure. The configuration in Figure 10-6 shows examples of shadowing across controllers, which prevents one controller from being a single point of failure. Shadowing across HSJ and HSC controllers provides optimal scalability and availability within an OpenVMS Cluster system.

Figure 10-6 Volume Shadowing Across Controllers


As Figure 10-6 shows, shadowing across controllers has three variations:

  • Strategy A shows each volume in the shadow set attached to a separate controller. This configuration is not optimal because each volume is attached to only one controller.
  • Strategy B shows dual-ported devices that have two paths to the volumes through separate controllers. This strategy is an optimal variation because two HSC controllers have direct access to a single storage device.
  • Strategy C shows HSJ controllers shadowed across SCSI buses. It also is an optimal variation because two HSJ controllers have direct access to a single storage device.

Figure 10-7 shows an example of shadowing across nodes.

Figure 10-7 Volume Shadowing Across Nodes


As Figure 10-7 shows, shadowing across nodes provides the advantage of flexibility in distance. However, it requires MSCP server overhead for write I/Os. In addition, the failure of one of the nodes and its subsequent return to the OpenVMS Cluster will cause a copy operation.

If you have multiple volumes, shadowing inside a controller and shadowing across controllers are more effective than shadowing across nodes.

Reference: See Volume Shadowing for OpenVMS for more information.

10.4 Scalability in DSSI OpenVMS Clusters

Each DSSI interconnect can have up to eight nodes attached; four can be systems and the rest can be storage devices. Figure 10-8, Figure 10-9, and Figure 10-10 show a progression from a two-node DSSI OpenVMS Cluster to a four-node DSSI OpenVMS Cluster.

10.4.1 Two-Node DSSI OpenVMS Cluster

In Figure 10-8, two nodes are connected to four disks by a common DSSI interconnect.

Figure 10-8 Two-Node DSSI OpenVMS Cluster


The advantages and disadvantages of the configuration shown in Figure 10-8 include:

Advantages

  • Both nodes have shared, direct access to all storage.
  • The Ethernet LAN ensures failover capability if the DSSI interconnect fails.

Disadvantages

  • The amount of storage that is directly accessible to all nodes is limited.
  • A single DSSI interconnect can become a single point of failure.

If the OpenVMS Cluster in Figure 10-8 required more processing power, more storage, and better redundancy, this could lead to a configuration like the one shown in Figure 10-9.

10.4.2 Four-Node DSSI OpenVMS Cluster with Shared Access

In Figure 10-9, four nodes have shared, direct access to eight disks through two DSSI interconnects. Two of the disks are shadowed across DSSI interconnects.

Figure 10-9 Four-Node DSSI OpenVMS Cluster with Shared Access


The advantages and disadvantages of the configuration shown in Figure 10-9 include:

Advantages

  • All nodes have shared, direct access to all storage.
  • The Ethernet LAN ensures failover capability if the DSSI interconnect fails.
  • Shadowing across DSSI interconnects provides increased performance and availability.

Disadvantage

  • The amount of storage that is directly accessible to all nodes is limited.

If the configuration in Figure 10-9 required more storage, this could lead to a configuration like the one shown in Figure 10-10.

10.4.3 Four-Node DSSI OpenVMS Cluster with Some Nonshared Access

Figure 10-10 shows an OpenVMS Cluster with 4 nodes and 10 disks. This model differs from Figure 10-8 and Figure 10-9 in that some of the nodes do not have shared, direct access to some of the disks, which therefore must be MSCP served. For the best performance, place your highest-priority data on disks that are directly connected to your nodes by common DSSI interconnects. Volume shadowing across common DSSI interconnects provides the highest availability and may increase read performance.

Figure 10-10 DSSI OpenVMS Cluster with 10 Disks


The advantages and disadvantages of the configuration shown in Figure 10-10 include:

Advantages

  • All nodes have shared, direct access to most of the storage.
  • The MSCP server is enabled to allow failover to the alternate DSSI interconnect if one of the DSSI interconnects fails.
  • Shadowing across DSSI interconnects provides increased performance and availability.
  • The SCSI storage connected through the HSZ controllers provides good performance and scalability.

Disadvantages

  • The amount of storage that is directly accessible to all nodes is limited.
  • Shadow set 2 requires MSCP serving to coordinate shadowing activity.
  • Some nodes do not have direct access to storage. For example, Alpha 2 and Alpha 4 do not have direct access to disks connected to Alpha 1 and Alpha 3.

10.5 Scalability in MEMORY CHANNEL OpenVMS Clusters

Each MEMORY CHANNEL (MC) interconnect can have up to four nodes attached to its hub. For two-hub configurations, each node must have two PCI adapters, and each adapter must be attached to a different hub. In a two-node configuration, no hub is required because one of the PCI adapters serves as a virtual hub.

Figure 10-11, Figure 10-12, and Figure 10-13 show a progression from a two-node MEMORY CHANNEL cluster to a four-node MEMORY CHANNEL cluster.

Reference: For additional configuration information and a more detailed technical summary of how MEMORY CHANNEL works, see Appendix B.

10.5.1 Two-Node MEMORY CHANNEL Cluster

In Figure 10-11, two nodes are connected by a MEMORY CHANNEL interconnect, a LAN (Ethernet, FDDI, or ATM) interconnect, and a Fibre Channel interconnect.

Figure 10-11 Two-Node MEMORY CHANNEL OpenVMS Cluster


The advantages and disadvantages of the configuration shown in Figure 10-11 include:

Advantages

  • Both nodes have shared, direct access to all storage.
  • The Ethernet/FDDI/ATM interconnect enables failover if the MEMORY CHANNEL interconnect fails.
  • The limit of two MEMORY CHANNEL nodes means that no hub is required; one PCI adapter serves as a virtual hub.

Disadvantages

  • The amount of storage that is directly accessible to all nodes is limited.
  • A single SCSI interconnect or HSZ controller can become a single point of failure.

If the OpenVMS Cluster in Figure 10-11 required more processing power and better redundancy, this could lead to a configuration like the one shown in Figure 10-12.

10.5.2 Three-Node MEMORY CHANNEL Cluster

In Figure 10-12, three nodes are connected by a high-speed MEMORY CHANNEL interconnect, as well as by a LAN (Ethernet, FDDI, or ATM) interconnect. These nodes also have shared, direct access to storage through the Fibre Channel interconnect.

Figure 10-12 Three-Node MEMORY CHANNEL OpenVMS Cluster


The advantages and disadvantages of the configuration shown in Figure 10-12 include:

Advantages

  • All nodes have shared, direct access to storage.
  • The Ethernet/FDDI/ATM interconnect enables failover if the MEMORY CHANNEL interconnect fails.
  • The addition of a MEMORY CHANNEL hub increases the limit on the number of nodes to a total of four.

Disadvantage

  • The amount of storage that is directly accessible to all nodes is limited.

If the configuration in Figure 10-12 required more storage, this could lead to a configuration like the one shown in Figure 10-13.

10.5.3 Four-Node MEMORY CHANNEL OpenVMS Cluster

In Figure 10-13, each node is connected by a MEMORY CHANNEL interconnect as well as by a CI interconnect.

Figure 10-13 MEMORY CHANNEL Cluster with a CI Cluster


The advantages and disadvantages of the configuration shown in Figure 10-13 include:

Advantages

  • All nodes have shared, direct access to all of the storage.
  • This configuration has more than double the storage and processing capacity of the one shown in Figure 10-12.
  • If the MEMORY CHANNEL interconnect fails, the CI can take over internode communication.
  • The CIPCA adapters on two of the nodes enable the addition of Alpha systems to a CI cluster that formerly comprised VAX (CIXCD-based) systems.
  • Multiple CIs between processors and storage provide twice the performance of one path. Bandwidth further increases because MEMORY CHANNEL offloads internode traffic from the CI, enabling the CI to be devoted only to storage traffic. This improves the performance of the entire cluster.
  • Volume shadowed, dual-ported disks increase data availability.

Disadvantage

  • This configuration is complex and requires the care of an experienced system manager.

10.6 Scalability in SCSI OpenVMS Clusters

SCSI-based OpenVMS Clusters allow commodity-priced storage devices to be used directly in OpenVMS Clusters. Using a SCSI interconnect in an OpenVMS Cluster offers you variations in distance, price, and performance capacity. This SCSI clustering capability is an ideal starting point when configuring a low-end, affordable cluster solution. SCSI clusters can range from desktop to deskside to departmental and larger configurations.

Note the following general limitations when using the SCSI interconnect:

  • Because the SCSI interconnect handles only storage traffic, it must always be paired with another interconnect for node-to-node traffic. In the figures shown in this section, MEMORY CHANNEL is the alternate interconnect; but CI, DSSI, Ethernet, and FDDI could also be used.
  • Total SCSI cable lengths must take into account the system's internal cable length. For example, an AlphaServer 1000 rackmount uses 1.6 m of internal cable to connect the internal adapter to the external connector. Two AlphaServer 1000s joined by a 2 m SCSI cable would use 1.6 m within each system, resulting in a total SCSI bus length of 5.2 m. (The sketch following this list works through this arithmetic.)
    Reference: For more information about internal SCSI cable lengths as well as highly detailed information about clustering SCSI devices, see Appendix A.
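
A minimal Python sketch of the cable-length arithmetic in the example above follows; the 1.6 m internal length applies to the AlphaServer 1000 rackmount quoted in the text, and other system models differ.

  # Total SCSI bus length = external cable plus the internal cable inside
  # each attached system. The 1.6 m figure is the AlphaServer 1000 value
  # quoted above; other models have different internal lengths.

  internal_cable_m = 1.6           # per AlphaServer 1000 rackmount
  external_cable_m = 2.0           # cable joining the two systems
  differential_limit_m = 25.0      # maximum differential SCSI bus length

  total_bus_m = 2 * internal_cable_m + external_cable_m    # 5.2 m
  print("Total SCSI bus length: %.1f m" % total_bus_m)
  print("Within the %.0f m differential limit: %s"
        % (differential_limit_m, total_bus_m <= differential_limit_m))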

The figures in this section show a progression from a two-node SCSI configuration with modest storage to a four-node SCSI hub configuration with maximum storage and further expansion capability.

10.6.1 Two-Node Fast-Wide SCSI Cluster

In Figure 10-14, two nodes are connected by a 25-m, fast-wide differential (FWD) SCSI bus, with a MEMORY CHANNEL (or any other) interconnect for internode traffic. The BA356 storage cabinet contains a power supply, a DWZZB single-ended to differential converter, and six disk drives. This configuration can have either narrow or wide disks.

Figure 10-14 Two-Node Fast-Wide SCSI Cluster


The advantages and disadvantages of the configuration shown in Figure 10-14 include:

Advantages

  • Low cost SCSI storage is shared by two nodes.
  • With the BA356 cabinet, you can use a narrow (8 bit) or wide (16 bit) SCSI bus.
  • The DWZZB converts single-ended signals to differential.
  • The fast-wide SCSI interconnect provides 20 MB/s performance.
  • MEMORY CHANNEL handles internode traffic.
  • The differential SCSI bus can be 25 m.

Disadvantage

  • Storage capacity is somewhat limited.

If the configuration in Figure 10-14 required even more storage, this could lead to a configuration like the one shown in Figure 10-15.

10.6.2 Two-Node Fast-Wide SCSI Cluster with HSZ Storage

In Figure 10-15, two nodes are connected by a 25-m, fast-wide differential (FWD) SCSI bus, with a MEMORY CHANNEL (or any other) interconnect for internode traffic. Multiple storage shelves are within the HSZ controller.

Figure 10-15 Two-Node Fast-Wide SCSI Cluster with HSZ Storage


The advantages and disadvantages of the configuration shown in Figure 10-15 include:

Advantages

  • Costs slightly more than the configuration shown in Figure 10-14, but offers significantly more storage. (The HSZ controller enables you to add more storage.)
  • Cache in the HSZ, which also provides RAID 0, 1, and 5 technologies. The HSZ is a differential device; no converter is needed.
  • MEMORY CHANNEL handles internode traffic.
  • The FWD bus provides 20 MB/s throughput.
  • Includes a 25 m differential SCSI bus.

Disadvantage

  • This configuration is more expensive than the one shown in Figure 10-14.

10.6.3 Three-Node Fast-Wide SCSI Cluster

In Figure 10-16, three nodes are connected by two 25-m, fast-wide differential (FWD) SCSI interconnects. Multiple storage shelves are contained in each HSZ controller, and more storage is contained in the BA356 at the top of the figure.

Figure 10-16 Three-Node Fast-Wide SCSI Cluster


The advantages and disadvantages of the configuration shown in Figure 10-16 include:

Advantages

  • Combines the advantages of the configurations shown in Figure 10-14 and Figure 10-15:
    • Significant (25 m) bus distance and scalability.
    • Includes cache in the HSZ, which also provides RAID 0, 1, and 5 technologies. The HSZ contains multiple storage shelves.
    • FWD bus provides 20 MB/s throughput.
    • With the BA356 cabinet, you can use a narrow (8 bit) or wide (16 bit) SCSI bus.

Disadvantage

  • This configuration is more expensive than those shown in previous figures.

10.6.4 Four-Node Ultra SCSI Hub Configuration

Figure 10-17 shows four nodes connected by a SCSI hub. The SCSI hub obtains power and cooling from the storage cabinet, such as the BA356. The SCSI hub does not connect to the SCSI bus of the storage cabinet.

Figure 10-17 Four-Node Ultra SCSI Hub Configuration


The advantages and disadvantages of the configuration shown in Figure 10-17 include:

Advantages

  • Provides significantly more bus distance and scalability than the configuration shown in Figure 10-15.
  • The SCSI hub provides fair arbitration on the SCSI bus. This provides more uniform, predictable system behavior. Four CPUs are allowed only when fair arbitration is enabled.
  • Up to two dual HSZ controllers can be daisy-chained to the storage port of the hub.
  • Two power supplies in the BA356 (one for backup).
  • Cache in the HSZs, which also provides RAID 0, 1, and 5 technologies.
  • Ultra SCSI bus provides 40 MB/s throughput.

Disadvantages

  • You cannot add CPUs to this configuration by daisy-chaining a SCSI interconnect from a CPU or HSZ to another CPU.
  • This configuration is more expensive than those shown in Figure 10-14 and Figure 10-15.
  • Only HSZ storage can be connected. You cannot attach a storage shelf with disk drives directly to the SCSI hub.

10.7 Scalability in OpenVMS Clusters with Satellites

The number of satellites in an OpenVMS Cluster and the amount of storage that is MSCP served determine the quantity and capacity of servers required. Satellites are systems that do not have direct access to a system disk or to other OpenVMS Cluster storage. Satellites are usually workstations, but they can be any OpenVMS Cluster node that is served storage by other nodes in the OpenVMS Cluster.

Each Ethernet LAN segment should have only 10 to 20 satellite nodes attached. Figure 10-18, Figure 10-19, Figure 10-20, and Figure 10-21 show a progression from a 6-satellite LAN to a 45-satellite LAN.
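
As a rough sizing aid, the following Python sketch spreads a planned satellite population across Ethernet segments using the upper end of the 10-to-20 guideline; the 45-satellite count is only an example matching the largest configuration in this section.

  # Rough sizing: number of Ethernet LAN segments needed so that no segment
  # carries more than the recommended number of satellites. Counts are
  # illustrative.
  import math

  satellites = 45          # planned satellite count (largest example in this section)
  max_per_segment = 20     # upper end of the 10-20 guideline

  segments = math.ceil(satellites / max_per_segment)        # 3 segments
  per_segment = math.ceil(satellites / segments)            # about 15 each
  print("Use at least %d LAN segments (about %d satellites per segment)"
        % (segments, per_segment))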

10.7.1 Six-Satellite OpenVMS Cluster

In Figure 10-18, six satellites and a boot server are connected by Ethernet.

Figure 10-18 Six-Satellite LAN OpenVMS Cluster


The advantages and disadvantages of the configuration shown in Figure 10-18 include:

Advantages

  • The MSCP server is enabled, which allows you to add satellites and provides access to more storage.
  • With one system disk, system management is relatively simple.
    Reference: For information about managing system disks, see Section 11.2.

Disadvantage

  • The Ethernet is a potential bottleneck and a single point of failure.

If the boot server in Figure 10-18 became a bottleneck, a configuration like the one shown in Figure 10-19 would be required.

