Guidelines for OpenVMS Cluster Configurations

D.4 Managing OpenVMS Cluster Systems Across Multiple Sites

In general, you manage a multiple-site OpenVMS Cluster using the same tools and techniques that you would use for any OpenVMS Cluster interconnected by a LAN. The following sections describe some additional considerations and recommend system management tools and techniques.

The following problems and possible solutions are specific to multiple-site OpenVMS Cluster systems.

Problem: Multiple-site configurations present an increased probability of the following failure modes:
  • OpenVMS Cluster quorum loss resulting from failure of the site-to-site communication link.
  • Site loss resulting from a power failure or other breakdown that affects all systems at that site.

Possible solution: Assign votes so that one preferred site has sufficient votes to maintain quorum and continue operation if the site-to-site communication link fails or if the other site is unavailable. Select the site with the most critical applications as the primary site. Sites with a few noncritical systems or satellites probably should not have sufficient votes to continue. A minimal vote-assignment sketch follows these problem descriptions.

Problem: Users expect that local resources will either remain available or rapidly become available after such a failure. This might not always be the case.

Possible solution: Consider the following options for setting user expectations:
  • Set management and user expectations regarding the likely effects of failures, and consider training remote users in the procedures to follow when a system becomes unresponsive because of quorum loss or other problems.
  • Develop management policies and procedures for identifying and handling these failure modes. These procedures may include manually adjusting quorum to allow a site to continue.
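
As a minimal vote-assignment sketch, assume a two-site cluster in which site A (the preferred site, with three voting nodes) must be able to continue alone and site B (one voting node) must not; the node counts and values shown are illustrative assumptions, not recommendations for any particular configuration. The parameters are set in each node's MODPARAMS.DAT and applied with AUTOGEN:

      ! On each of the three voting nodes at site A (preferred site)
      VOTES = 1
      EXPECTED_VOTES = 4    ! total votes configured in the cluster; quorum is 3

      ! On the single voting node at site B
      VOTES = 1
      EXPECTED_VOTES = 4    ! site B alone holds 1 vote and cannot maintain quorum

With these values, site A retains quorum (3 of the required 3 votes) if the intersite link fails or site B is lost, while site B hangs until connectivity is restored. From a node that still has quorum, the DCL command SET CLUSTER/EXPECTED_VOTES can lower the quorum after nodes or a site are removed permanently; recovering a site that has itself lost quorum requires the quorum adjustment procedures described in OpenVMS Cluster Systems.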

D.4.1 Methods and Tools

You can use the following system management methods and tools to manage both remote and local nodes:

  • There are two options for remote-site console access:
    • Use an intersite link through a DECserver in reverse LAT mode, and connect to the remote consoles with one of the following tools:
      • SET HOST/LAT command
      • POLYCENTER Console Manager
      • OpenVMS Cluster Console System (VCS)
      • Disaster Tolerant Cluster Services for OpenVMS, a Compaq system management and software package
    • Use a modem to dial up the remote system consoles.
  • An alternative to remote-site console access is to have a system manager at each site.
  • To enable device and processor control commands to take effect across all nodes in an OpenVMS Cluster system, use the System Management utility (SYSMAN) that is supplied with the OpenVMS operating system.
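
For example, a minimal SYSMAN session that runs the same command on every node in the cluster might look like the following sketch; the device name DSA1: is an illustrative assumption:

      $ RUN SYS$SYSTEM:SYSMAN
      SYSMAN> SET ENVIRONMENT/CLUSTER
      SYSMAN> DO SHOW DEVICE DSA1:
      SYSMAN> EXIT

SET ENVIRONMENT/CLUSTER directs subsequent commands to all nodes in the local cluster, and DO executes the specified DCL command on each node in that environment, so a single session at either site can inspect devices and processors at both sites while the intersite link is up.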

D.4.2 Shadowing Data

Volume Shadowing for OpenVMS allows you to shadow data volumes across multiple sites. System disks can be members of a volume shadowing or RAID set within a site; however, use caution when configuring system disk shadow set members across multiple sites, because it may be necessary to boot from a remote system disk shadow set member after a failure, and this is not possible if your system does not support FDDI booting.
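
As an illustration only, the following MOUNT command creates a two-member shadow set with one member at each site; the shadow set name, member device names, allocation classes, and volume label are assumptions for this sketch:

      $ MOUNT/SYSTEM DSA42: /SHADOW=($1$DUA10:, $2$DUA20:) PAYROLL

Reads can be satisfied by either member, but each write is issued to both members and therefore crosses the intersite link, so the link's bandwidth and latency directly affect write performance.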

See the Software Product Descriptions (SPDs) for complete and up-to-date details about Volume Shadowing for OpenVMS (SPD 27.29.xx) and StorageWorks RAID for OpenVMS (SPD 46.49.xx).

D.4.3 Monitoring Performance

Monitor performance for multiple-site OpenVMS Cluster systems as follows:

  • Monitor the virtual circuit (VC) packet-loss count and round-trip time values using the System Dump Analyzer (SDA). The procedures for doing this are documented in OpenVMS Cluster Systems; a brief sketch follows this list.
  • Monitor the intersite link bit error ratio (BER) and packet loss using network management tools. You can use tools such as POLYCENTER NetView or DECmcc to access the GIGAswitch and WAN T3/SONET option card's management information and to set alarm thresholds. See the GIGAswitch, WAN T3/SONET card, POLYCENTER, and DECmcc documentation, as appropriate.
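
The SDA procedure referenced in the first item above might look like the following sketch; the remote node name REMOTE is an assumption, and OpenVMS Cluster Systems remains the authoritative reference for the exact commands and for interpreting the display:

      $ ANALYZE/SYSTEM
      SDA> SHOW PORTS
      SDA> SHOW PORTS/VC=VC_REMOTE
      SDA> EXIT

The first SHOW PORTS command defines symbols for the LAN (PE) port; the /VC qualifier then displays the virtual circuit to node REMOTE, including its round-trip time and retransmission counts. Rising round-trip times or retransmission counts on the virtual circuits to remote-site nodes are an early sign of congestion or errors on the intersite link.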

