HP OpenVMS Systems Documentation
Guidelines for OpenVMS Cluster Configurations
4.11.1 Multiple LAN Adapters
Multiple LAN adapters are supported. The adapters can be for different LAN types or different adapter models of the same LAN type. Multiple LAN adapters can be used to provide the following:
4.11.1.1 Multiple LAN Path Load Distribution
When multiple node-to-node LAN paths are available, the OpenVMS Cluster software chooses the set of paths to use based on the following criteria, which are evaluated in strict precedence order:
Packet transmissions are distributed in round-robin fashion across all
communication paths between local and remote adapters that meet the
preceding criteria.
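As an illustration only, the following Python sketch shows the effect of round-robin distribution over the paths that satisfy the selection criteria. The path names and the "usable" flag are hypothetical placeholders; the real criteria are evaluated internally by the OpenVMS Cluster software.

```python
from itertools import cycle

# Hypothetical local-to-remote adapter paths; the "usable" flag stands in
# for the selection criteria evaluated by the cluster software.
paths = [
    {"name": "EWA0 -> remote EWA0", "usable": True},
    {"name": "EWB0 -> remote EWB0", "usable": True},
    {"name": "EWC0 -> remote EWC0", "usable": False},   # fails a criterion
]

# Keep only the paths that meet the criteria, then hand successive
# packets to each eligible path in turn (round-robin).
eligible = [p for p in paths if p["usable"]]
next_path = cycle(eligible)

for packet_number in range(6):
    path = next(next_path)
    print(f"packet {packet_number} -> {path['name']}")
```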
Because LANs are ideal for spanning great distances, you may want to supplement an intersite link's throughput with high availability. You can do this by configuring critical nodes with multiple LAN adapters, each connected to a different intersite LAN link.
A common cause of intersite link failure is mechanical destruction of
the intersite link. This can be avoided by path diversity, that is,
physically separating the paths of the multiple intersite links. Path
diversity helps to ensure that the configuration is unlikely to be
affected by disasters affecting an intersite link.
The following guidelines apply to all LAN-based OpenVMS Cluster systems:
4.11.3 Ethernet (10/100) and Gigabit Ethernet Advantages
The Ethernet (10/100) interconnect is typically the lowest cost of all OpenVMS Cluster interconnects. Gigabit Ethernet interconnects offer the following advantages in addition to the advantages listed in Section 4.11:
4.11.4 Ethernet (10/100) and Gigabit Ethernet Throughput
The Ethernet technology offers a range of baseband transmission speeds:
Ethernet adapters do not provide hardware assistance, so processor overhead is higher than for CI or DSSI. Consider the capacity of the total network design when you configure an OpenVMS Cluster system with many Ethernet-connected nodes or when the Ethernet also supports a large number of PCs or printers. General network traffic on an Ethernet can reduce the throughput available for OpenVMS Cluster communication. Fast Ethernet and Gigabit Ethernet can significantly improve throughput. Multiple Ethernet adapters can be used to improve cluster performance by offloading general network traffic.
Reference: For LAN configuration guidelines, see
Section 4.11.2.
The following Ethernet adapters and their internal buses are supported in an OpenVMS Cluster configuration:
Reference: For detailed information about the Ethernet adapters supported on each AlphaServer system, go to the OpenVMS web page at:
Select AlphaSystems (from the left navigation panel under related
links). Next, select the AlphaServer system of interest and then its
QuickSpecs. The QuickSpecs for each AlphaServer system briefly describe
all options, including the adapters, supported on that system.
You can use transparent Ethernet-to-FDDI translating bridges to provide an interconnect between a 10-Mb/s Ethernet segment and a 100-Mb/s FDDI ring. These Ethernet-to-FDDI bridges are also called 10/100 bridges. They perform high-speed translation of network data packets between the FDDI and Ethernet frame formats.
Reference: See Figure 10-21 for an example of these bridges.
You can use switches to isolate traffic and to aggregate bandwidth,
which can result in greater throughput.
Use the following guidelines when configuring systems in a Gigabit Ethernet cluster:
4.11.8 ATM Advantages
ATM offers the following advantages, in addition to those listed in Section 4.11:
4.11.9 ATM Throughput
The ATM interconnect transmits up to 622 Mb/s. The adapter that
supports this throughput is the DAPCA.
ATM adapters supported in an OpenVMS Cluster system and the internal buses on which they are supported are shown in the following list:
4.12 Fiber Distributed Data Interface (FDDI)
FDDI is an ANSI standard LAN interconnect that uses fiber-optic or
copper cable.
FDDI offers the following advantages in addition to the LAN advantages listed in Section 4.11:
4.12.2 FDDI Node Types
The FDDI standards define the following two types of nodes:
4.12.3 FDDI Distance
FDDI limits the total fiber path to 200 km (125 miles). The maximum
distance between adjacent FDDI devices is 40 km with single-mode fiber
and 2 km with multimode fiber. In order to control communication delay,
however, it is advisable to limit the maximum distance between any two
OpenVMS Cluster nodes on an FDDI ring to 40 km.
The maximum throughput of the FDDI interconnect (100 Mb/s) is 10 times higher than that of Ethernet. In addition, FDDI supports transfers using large packets (up to 4468 bytes). Only FDDI nodes connected exclusively by FDDI can make use of large packets.
Because FDDI adapters do not provide processing assistance for OpenVMS
Cluster protocols, more processing power is required than for CI or
DSSI.
Following is a list of supported FDDI adapters and the buses they support:
Reference: For detailed information about the adapters supported on each AlphaServer system, go to the OpenVMS web page at:
Select AlphaSystems (from the left navigation panel under related
links). Next, select the AlphaServer system of interest and then its
QuickSpecs. The QuickSpecs for each AlphaServer system briefly describe
all options, including the adapters, supported on that system.
FDDI-based configurations use FDDI for node-to-node communication. The HS1xx and HS2xx families of storage servers provide FDDI-based storage access to OpenVMS Cluster nodes.
Chapter 5
Storage Interconnect | Storage Devices |
---|---|
CI | HSJ and HSC controllers and SCSI storage |
DSSI | HSD controllers, ISEs, and SCSI storage |
SCSI | HSZ controllers and SCSI storage |
Fibre Channel | HSG and HSV controllers and SCSI storage |
FDDI | HSxxx controllers and SCSI storage |
5.1.3 How Floor Space Affects Storage Choices
If the cost of floor space is high and you want to minimize the floor
space used for storage devices, consider these options:
Storage capacity is the amount of space needed on storage devices to
hold system, application, and user files. Knowing your storage capacity
can help you to determine the amount of storage needed for your OpenVMS
Cluster configuration.
5.2.1 Estimating Disk Capacity Requirements
To estimate your online storage capacity requirements, add together the storage requirements for your OpenVMS Cluster system's software, as explained in Table 5-2.
Software Component | Description |
---|---|
OpenVMS operating system |
Estimate the number of blocks required by the OpenVMS operating system.
Reference: Your OpenVMS installation documentation and Software Product Description (SPD) contain this information. |
Page, swap, and dump files |
Use AUTOGEN to determine the amount of disk space required for page,
swap, and dump files.
Reference: The HP OpenVMS System Manager's Manual provides information about calculating and modifying these file sizes. |
Site-specific utilities and data | Estimate the disk storage requirements for site-specific utilities, command procedures, online documents, and associated files. |
Application programs |
Estimate the space required for each application to be installed on
your OpenVMS Cluster system, using information from the application
suppliers.
Reference: Consult the appropriate Software Product Description (SPD) to estimate the space required for normal operation of any layered product you need to use. |
User-written programs | Estimate the space required for user-written programs and their associated databases. |
Databases | Estimate the size of each database. This information should be available in the documentation pertaining to the application-specific database. |
User data |
Estimate user disk-space requirements according to these guidelines: |
Total requirements | The sum of the preceding estimates is the approximate amount of disk storage presently needed for your OpenVMS Cluster system configuration. |
Before you finish determining your total disk capacity requirements, you may also want to consider future growth for online storage and for backup storage.
For example, at what rate are new files created in your OpenVMS Cluster system? By estimating this number and adding it to the total disk storage requirements that you calculated using Table 5-2, you can obtain a total that more accurately represents your current and future needs for online storage.
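The arithmetic behind Table 5-2 is a straightforward sum. The following Python sketch uses entirely hypothetical block counts (one block is 512 bytes) to show one way to total the component estimates and add an allowance for future growth:

```python
# Hypothetical block estimates (1 block = 512 bytes); substitute your own figures.
requirements = {
    "OpenVMS operating system": 700_000,
    "Page, swap, and dump files": 400_000,
    "Site-specific utilities and data": 150_000,
    "Application programs": 500_000,
    "User-written programs": 100_000,
    "Databases": 1_200_000,
    "User data": 2_000_000,
}

current_total = sum(requirements.values())

# Allowance for growth: assumed rate of new-file creation over a planning period.
growth_blocks_per_month = 50_000    # assumption
planning_months = 24                # assumption

total = current_total + growth_blocks_per_month * planning_months
gb = lambda blocks: blocks * 512 / 2**30
print(f"Current estimate : {current_total:,} blocks ({gb(current_total):.1f} GB)")
print(f"With growth      : {total:,} blocks ({gb(total):.1f} GB)")
```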
To determine backup storage requirements, consider how you deal with obsolete or archival data. In most storage subsystems, old files become unused while new files come into active use. Moving old files from online to backup storage on a regular basis frees online storage for new files and keeps online storage requirements under control.
Planning for adequate backup storage capacity can make archiving
procedures more effective and reduce the capacity requirements for
online storage.
5.3 Choosing Disk Performance Optimizers
Estimating your anticipated disk performance work load and analyzing the work load data can help you determine your disk performance requirements.
You can use the Monitor utility and DECamds to help you determine which
performance optimizer best meets your application and business needs.
5.3.1 Performance Optimizers
Performance optimizers are software or hardware products that improve storage performance for applications and data. Table 5-3 explains how various performance optimizers work.
Optimizer | Description |
---|---|
DECram for OpenVMS | A disk device driver that enables system managers to create logical disks in memory to improve I/O performance. Data on an in-memory DECram disk can be accessed at a faster rate than data on hardware disks. DECram disks can be shadowed with Volume Shadowing for OpenVMS and served with the MSCP server. |
Solid-state disks | In many systems, approximately 80% of the I/O requests can demand information from approximately 20% of the data stored on line. Solid-state devices can yield the rapid access needed for this subset of the data. |
Disk striping |
Disk striping (RAID level 0) lets applications access an array of disk
drives in parallel for higher throughput. Disk striping works by
grouping several disks into a "stripe set" and then dividing
the application data into "chunks" that are spread equally
across the disks in the stripe set in a round-robin fashion (see the illustrative sketch following this table).
By reducing access time, disk striping can improve performance, especially if the application:
Two independent types of disk striping are available: controller-based striping and host-based striping.
Note: You can use Volume Shadowing for OpenVMS software in combination with disk striping to make stripe set members redundant. You can shadow controller-based stripe sets, and you can shadow host-based disk stripe sets. |
Extended file cache (XFC) | OpenVMS Alpha Version 7.3 offers improved host-based caching with XFC, which can replace and can coexist with virtual I/O cache (VIOC). XFC is a clusterwide, file-system data cache that offers several features not available with VIOC, including read-ahead caching and automatic resizing of the cache to improve performance. |
Controllers with disk cache | Some storage technologies use memory to form disk caches. Accesses that can be satisfied from the cache can be done almost immediately and without any seek time or rotational latency. For these accesses, the two largest components of the I/O response time are eliminated. The HSC, HSJ, HSD, HSZ, and HSG controllers contain caches. Every RF and RZ disk has a disk cache as part of its embedded controller. |
Reference: See Section 10.8 for more information about how these performance optimizers increase an OpenVMS Cluster's ability to scale I/Os.
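To make the chunk layout described for disk striping in Table 5-3 concrete, the following Python sketch (illustrative only, not OpenVMS code) maps a logical block number to a stripe-set member and a block offset on that member; the chunk size and member count are hypothetical:

```python
def locate_block(lbn: int, chunk_size: int, members: int) -> tuple[int, int]:
    """Map a logical block number to (member disk, block offset on that disk).

    Chunks are laid out round-robin across the stripe-set members, so
    consecutive chunks land on consecutive disks.
    """
    stripe = lbn // chunk_size              # which chunk the block falls in
    disk = stripe % members                 # round-robin choice of member disk
    offset = (stripe // members) * chunk_size + lbn % chunk_size
    return disk, offset

# Example: 3-member stripe set with a hypothetical 128-block chunk size.
for lbn in (0, 127, 128, 300, 500):
    disk, offset = locate_block(lbn, chunk_size=128, members=3)
    print(f"logical block {lbn:4d} -> member {disk}, block offset {offset}")
```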