Guidelines for OpenVMS Cluster Configurations
10.7.2 Six-Satellite OpenVMS Cluster with Two Boot Nodes

Figure 10-19 shows six satellites and two boot servers connected by Ethernet. Boot server 1 and boot server 2 perform MSCP server dynamic load balancing: they arbitrate and share the work load between them, and if one node stops functioning, the other takes over. MSCP dynamic load balancing requires shared access to storage.

Figure 10-19 Six-Satellite LAN OpenVMS Cluster with Two Boot Nodes

The advantages and disadvantages of the configuration shown in Figure 10-19 include:
If the LAN in Figure 10-19 became an OpenVMS Cluster bottleneck, this
could lead to a configuration like the one shown in Figure 10-20.
Figure 10-20 shows 12 satellites and 2 boot servers connected by two Ethernet segments. These two Ethernet segments are also joined by a LAN bridge. Because each satellite has dual paths to storage, this configuration also features MSCP dynamic load balancing.

Figure 10-20 Twelve-Satellite OpenVMS Cluster with Two LAN Segments

The advantages and disadvantages of the configuration shown in Figure 10-20 include:
If the OpenVMS Cluster in Figure 10-20 needed to grow beyond its
current limits, this could lead to a configuration like the one shown
in Figure 10-21.
Figure 10-21 shows a large, 51-node OpenVMS Cluster that includes 45 satellite nodes. The three boot servers, Alpha 1, Alpha 2, and Alpha 3, share three disks: a common disk, a page and swap disk, and a system disk. The FDDI ring has three LAN segments attached. Each segment has 15 workstation satellites as well as its own boot node.

Figure 10-21 Forty-Five Satellite OpenVMS Cluster with FDDI Ring

The advantages and disadvantages of the configuration shown in Figure 10-21 include:
10.7.5 High-Powered Workstation OpenVMS Cluster

Figure 10-22 shows an OpenVMS Cluster configuration that provides high performance and high availability on the FDDI ring.

Figure 10-22 High-Powered Workstation Server Configuration
In Figure 10-22, several Alpha workstations, each with its own system
disk, are connected to the FDDI ring. Putting Alpha workstations on the
FDDI provides high performance because each workstation has direct
access to its system disk and because FDDI offers 10 times the
bandwidth of Ethernet. Because Alpha workstations have FDDI adapters
and the processing capacity to take advantage of FDDI's speed, placing
them on an FDDI is a useful alternative when workstation performance is
critical.
The following are guidelines for setting up an OpenVMS Cluster with satellites:
10.7.7 Extended LAN Configuration Guidelines

You can use bridges between LAN segments to form an extended LAN (ELAN). This can increase availability, distance, and aggregate bandwidth as compared with a single LAN. However, an ELAN can increase delay and can reduce bandwidth on some paths. Factors such as packet loss, queuing delays, and packet size can also affect ELAN performance. Table 10-3 provides guidelines for ensuring adequate LAN performance when dealing with such factors.
10.7.8 System Parameters for OpenVMS Clusters

In an OpenVMS Cluster with satellites and servers, specific system parameters can help you manage your OpenVMS Cluster more efficiently. Table 10-4 gives suggested values for these system parameters.
1 Correlate with bridge timers and LAN utilization.
Reference: For a more in-depth description of these
parameters, see OpenVMS Cluster Systems.
The ability to scale I/Os is an important factor in the growth of your OpenVMS Cluster. Adding more components to your OpenVMS Cluster requires high I/O throughput so that additional components do not create bottlenecks and decrease the performance of the entire OpenVMS Cluster. Many factors can affect I/O throughput:
These factors can affect I/O scalability either singly or in combination. The following sections explain these factors and suggest ways to maximize I/O throughput and scalability without having to change your application. Additional factors that affect I/O throughput are types of interconnects and types of storage subsystems.
Reference: See Chapter 4 for more information
about interconnects and Chapter 5 for more information about types
of storage subsystems.
MSCP server capability provides a major benefit to OpenVMS Clusters: it enables communication between nodes and storage that are not directly connected to each other. However, MSCP served I/O does incur overhead. Figure 10-23 is a simplification of how packets require extra handling by the serving system.

Figure 10-23 Comparison of Direct and MSCP Served Access

In Figure 10-23, an MSCP served packet requires an extra "stop" at another system before reaching its destination. When the MSCP served packet reaches the system associated with the target storage, the packet is handled as if for direct access.
In an OpenVMS Cluster that requires a large amount of MSCP serving, I/O
performance is less efficient and scalability is decreased. Total
I/O throughput is approximately 20% lower when I/O is MSCP served than
when it uses direct access. Design your configuration so that a few
large nodes serve many satellites rather than having satellites serve
their local storage to the entire OpenVMS Cluster.
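To make the cost of MSCP serving concrete, the following Python sketch models aggregate throughput when part of the I/O load is served rather than direct. It is a rough illustration only; the function name, the example numbers, and the fixed 20% penalty (the figure cited above) are assumptions, not measured behavior.

```python
# Illustrative sketch only: rough throughput model for MSCP served I/O,
# based on the ~20% penalty cited above. The names and numbers here are
# assumptions for illustration, not measured values.

def effective_throughput(direct_mb_per_s, served_fraction, served_penalty=0.20):
    """Estimate aggregate throughput when a fraction of I/O is MSCP served.

    direct_mb_per_s  -- throughput the storage path delivers with direct access
    served_fraction  -- fraction of I/O (0.0-1.0) that must pass through an
                        MSCP serving node (the extra "stop" in Figure 10-23)
    served_penalty   -- fractional loss for served I/O (~0.20 per the text)
    """
    direct_part = (1.0 - served_fraction) * direct_mb_per_s
    served_part = served_fraction * direct_mb_per_s * (1.0 - served_penalty)
    return direct_part + served_part

# Example: if half of all I/O is MSCP served, aggregate throughput drops
# by roughly 10% relative to an all-direct-access configuration.
print(effective_throughput(100.0, 0.5))   # ~90.0
```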
In recent years, the ability of CPUs to process information has far outstripped the ability of I/O subsystems to feed processors with data. The result is an increasing percentage of processor time spent waiting for I/O operations to complete. Solid-state disks (SSDs), DECram, and RAID level 0 bridge this gap between processing speed and magnetic-disk access speed. Performance of magnetic disks is limited by seek and rotational latencies, while SSDs and DECram use memory, which provides nearly instant access. RAID level 0 is the technique of spreading (or "striping") a single file across several disk volumes. The objective is to reduce or eliminate a bottleneck at a single disk by partitioning heavily accessed files into stripe sets and storing them on multiple devices. This technique increases parallelism across many disks for a single I/O. Table 10-5 summarizes disk technologies and their features.
Note: Shared, direct access to a solid-state disk or
to DECram is the fastest alternative for scaling I/Os.
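As a rough illustration of the striping idea described above, the following Python sketch maps a logical block of a striped file to a member disk of a stripe set. The chunk size, member count, and function name are illustrative assumptions and do not reflect any particular RAID implementation.

```python
# Minimal sketch of the RAID level 0 ("striping") idea described above:
# a file's blocks are spread round-robin across several disk volumes so
# that a heavily accessed file does not bottleneck on a single spindle.
# The chunk size and member count are illustrative assumptions.

def stripe_location(block_number, members=4, chunk_blocks=16):
    """Map a logical block of a striped file to (member_disk, block_on_member)."""
    chunk_index = block_number // chunk_blocks
    offset_in_chunk = block_number % chunk_blocks
    member = chunk_index % members
    block_on_member = (chunk_index // members) * chunk_blocks + offset_in_chunk
    return member, block_on_member

# Consecutive chunks land on different disks, so a large sequential
# transfer (or many concurrent accesses to a hot file) is spread over
# all members of the stripe set.
for block in (0, 16, 32, 48, 64):
    print(block, stripe_location(block))
```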
The read/write ratio of your applications is a key factor in scaling I/O to shadow sets. MSCP writes to a shadow set are duplicated on the interconnect. Therefore, an application that has 100% (100/0) read activity may benefit from volume shadowing because shadowing causes multiple paths to be used for the I/O activity. An application with a 50/50 ratio will cause more interconnect utilization because write activity requires that an I/O be sent to each shadow member. Delays may be caused by the time required to complete the slowest I/O.
To determine I/O read/write ratios, use the DCL command MONITOR IO.
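The following Python sketch is a hedged illustration of the arithmetic above: a read is satisfied by one shadow set member, while each write is sent to every member. The function and the example counts are assumptions; in practice you would plug in the read and write rates reported by MONITOR IO.

```python
# Rough sketch of how the read/write ratio affects interconnect load for a
# shadow set, per the discussion above: each write is sent to every shadow
# set member, while a read is satisfied by one member. Figures reported by
# MONITOR IO can supply the read/write counts; the values below are examples.

def interconnect_ios(reads, writes, members=2):
    """Estimate I/O operations crossing the interconnect for a shadow set."""
    return reads + writes * members

# 100/0 read workload: no write duplication, 1000 interconnect I/Os.
print(interconnect_ios(reads=1000, writes=0))      # 1000
# 50/50 workload: the 500 writes go to both members, 1500 interconnect I/Os.
print(interconnect_ios(reads=500, writes=500))     # 1500
```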
Each I/O packet incurs processor and memory overhead, so grouping I/Os
together in one packet decreases overhead for all I/O activity. You can
achieve higher throughput if your application is designed to use bigger
packets. Smaller packets incur greater overhead.
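The effect of packet size can be estimated with simple arithmetic, as in the sketch below. The per-packet overhead value and the packet sizes are arbitrary illustrative assumptions; the point is only that fewer, larger packets incur less total overhead.

```python
# Back-of-the-envelope sketch of why larger I/O packets reduce overhead:
# per-packet CPU and memory cost is roughly fixed, so fewer, larger packets
# spend less total time on overhead. The per-packet cost and packet sizes
# here are arbitrary illustrative assumptions.

def total_overhead_us(transfer_kb, packet_kb, per_packet_overhead_us=100):
    """Estimate total per-packet overhead for a transfer of transfer_kb."""
    packets = -(-transfer_kb // packet_kb)      # ceiling division
    return packets * per_packet_overhead_us

# Moving 1024 KB in 4 KB packets versus 64 KB packets:
print(total_overhead_us(1024, 4))    # 25600 microseconds of overhead
print(total_overhead_us(1024, 64))   #  1600 microseconds of overhead
```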
Caching is the technique of storing recently or frequently used data in an area where it can be accessed more easily---in memory, in a controller, or in a disk. Caching complements solid-state disks, DECram, and RAID. Applications automatically benefit from the advantages of caching without any special coding. Caching reduces current and potential I/O bottlenecks within OpenVMS Cluster systems by reducing the number of I/Os between components. Table 10-6 describes the three types of caching.
Host-based disk caching provides different benefits from
controller-based and disk-based caching. In host-based disk caching,
the cache itself is not shareable among nodes. Controller-based and
disk-based caching are shareable because they are located in the
controller or disk, either of which is shareable.
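The following minimal Python sketch illustrates the general caching technique described above: recently read blocks are kept in memory so that repeated reads avoid another I/O. It is a conceptual example only and does not represent how OpenVMS host-based, controller-based, or disk-based caching is implemented.

```python
# Conceptual sketch of caching: recently read blocks are kept in host
# memory so repeated reads avoid another I/O. This is not how OpenVMS
# caching is implemented; it only illustrates why applications benefit
# from caching without any special coding.

from collections import OrderedDict

class ReadCache:
    def __init__(self, capacity=1024):
        self.capacity = capacity
        self.blocks = OrderedDict()          # block number -> data

    def read(self, block_number, read_from_disk):
        if block_number in self.blocks:      # cache hit: no disk I/O
            self.blocks.move_to_end(block_number)
            return self.blocks[block_number]
        data = read_from_disk(block_number)  # cache miss: one disk I/O
        self.blocks[block_number] = data
        if len(self.blocks) > self.capacity: # evict least recently used block
            self.blocks.popitem(last=False)
        return data
```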
A "hot" file is a file in your system on which the most activity occurs. Hot files exist because, in many environments, approximately 80% of all I/O goes to 20% of the data. This means that, of equal regions on a disk drive, 80% of the data being transferred goes to one place on the disk, as shown in Figure 10-24.

Figure 10-24 Hot-File Distribution

To increase the scalability of I/Os, focus on hot files, which can become a bottleneck if you do not manage them well. Hot-file activity is expressed in I/Os, megabytes transferred, and queue depth. RAID level 0 balances hot-file activity by spreading a single file over multiple disks, which reduces the performance impact of hot files. Use the following DCL commands to analyze hot-file activity:
The MONITOR IO and the MONITOR MSCP commands enable you to find out which disk and which server are hot.
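As a hypothetical illustration of the 80/20 pattern described above, the following Python sketch ranks files by I/O count and reports what share of the total activity the hottest 20% of files carry. The file names and counts are invented for the example; in practice the per-device activity would come from tools such as MONITOR IO.

```python
# Hypothetical sketch of the 80/20 hot-file pattern described above:
# given per-file I/O counts, rank the files and see how much of the
# activity the top 20% of files account for.

def hot_files(io_counts, top_fraction=0.20):
    """Return the hottest files and their share of total I/O activity."""
    ranked = sorted(io_counts.items(), key=lambda item: item[1], reverse=True)
    top_n = max(1, int(len(ranked) * top_fraction))
    top = ranked[:top_n]
    share = sum(count for _, count in top) / sum(io_counts.values())
    return top, share

# Example with made-up counts: two of ten files carry most of the I/O load.
counts = {"PAYROLL.DAT": 8000, "INDEX.IDX": 7000, "LOG1.LOG": 900,
          "LOG2.LOG": 800, "A.DAT": 700, "B.DAT": 600, "C.DAT": 500,
          "D.DAT": 400, "E.DAT": 300, "F.DAT": 200}
hottest, share = hot_files(counts)
print(hottest, round(share, 2))   # the top 20% of files carry ~77% of I/O
```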