HP OpenVMS Cluster Systems

Order Number: BA554-90021


June 2010

This manual describes cluster concepts, procedures, and guidelines for configuring and managing OpenVMS Cluster systems. Except where noted, the procedures and guidelines apply equally to Integrity servers and Alpha systems. This manual also includes information for providing high availability, building-block growth, and unified system management across coupled systems.

Revision/Update Information: This manual supersedes HP OpenVMS Cluster Systems, OpenVMS Alpha Version 7.3-1 and OpenVMS VAX Version 7.3.

Software Version: OpenVMS Version 8.4 for Integrity servers
OpenVMS Alpha Version 8.4




Hewlett-Packard Company
Palo Alto, California


© Copyright 2010 Hewlett-Packard Development Company, L.P.

Confidential computer software. Valid license from HP required for possession, use, or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.

The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

Intel and Itanium are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

Microsoft and Windows are US registered trademarks of Microsoft Corporation.

Printed in the US

ZK4477

The HP OpenVMS documentation set is available on CD-ROM.



Preface

Introduction

HP OpenVMS Cluster Systems describes system management for OpenVMS Cluster systems. Although the OpenVMS Cluster software for Integrity servers and for Alpha computers is purchased, licensed, and installed separately, the difference between the two architectures lies mainly in the hardware used. System management for Integrity servers and Alpha computers in an OpenVMS Cluster is essentially identical; exceptions are noted where they occur.

Note

This manual applies only to clusters made up of Integrity server systems and Alpha systems. For clusters that combine Alpha and VAX systems, see the previous version of this manual.

Who Should Use This Manual

This document is intended for anyone responsible for setting up and managing OpenVMS Cluster systems. To use the document as a guide to cluster management, you must have a thorough understanding of system management concepts and procedures, as described in the HP OpenVMS System Manager's Manual.

How This Manual Is Organized

HP OpenVMS Cluster Systems contains ten chapters and seven appendixes.

Chapter 1 introduces OpenVMS Cluster systems.

Chapter 2 presents the software concepts integral to maintaining OpenVMS Cluster membership and integrity.

Chapter 3 describes various OpenVMS Cluster configurations and the ways they are interconnected.

Chapter 4 explains how to set up an OpenVMS Cluster system and coordinate system files.

Chapter 5 explains how to set up an environment in which resources can be shared across nodes in the OpenVMS Cluster system.

Chapter 6 discusses disk and tape management concepts and procedures and how to use Volume Shadowing for OpenVMS to prevent data unavailability.

Chapter 7 discusses queue management concepts and procedures.

Chapter 8 explains how to build an OpenVMS Cluster system once the necessary preparations are made, and how to reconfigure and maintain the cluster.

Chapter 9 provides guidelines for configuring and building large OpenVMS Cluster systems, booting satellite nodes, and cross-architecture booting.

Chapter 10 describes ongoing OpenVMS Cluster system maintenance.

Appendix A lists and defines OpenVMS Cluster system parameters.

Appendix B provides guidelines for building a cluster common user authorization file.

Appendix C provides troubleshooting information.

Appendix D presents three sample programs for LAN control and explains how to use the Local Area OpenVMS Cluster Network Failure Analysis Program.

Appendix E describes the subroutine package used with local area OpenVMS Cluster sample programs.

Appendix F provides techniques for troubleshooting network problems related to the NISCA transport protocol.

Appendix G describes how the interactions of workload distribution and network topology affect OpenVMS Cluster system performance, and discusses transmit channel selection by PEDRIVER.

Related Documents

This document is not a one-volume reference manual. The utilities and commands are described in detail in the HP OpenVMS System Manager's Manual, the HP OpenVMS System Management Utilities Reference Manual, and the HP OpenVMS DCL Dictionary.

For additional information on the topics covered in this manual, see the related documents in the HP OpenVMS documentation set.

For additional information about HP OpenVMS products and services, see:

http://www.hp.com/go/openvms


Reader's Comments

HP welcomes your comments on this manual. Please send your comments or suggestions to:

openvmsdoc@hp.com

How To Order Additional Documentation

For information about how to order additional documentation, see:

http://www.hp.com/go/openvms/doc/order

Conventions

The following conventions are used in this manual:
[Return] In examples, a key name enclosed in a box indicates that you press a key on the keyboard. (In text, a key name is not enclosed in a box.)

In the HTML version of this document, this convention appears as brackets, rather than a box.

... A horizontal ellipsis in examples indicates one of the following possibilities:
  • Additional optional arguments in a statement have been omitted.
  • The preceding item or items can be repeated one or more times.
  • Additional parameters, values, or other information can be entered.
.
.
.
A vertical ellipsis indicates the omission of items from a code example or command format; the items are omitted because they are not important to the topic being discussed.
( ) In command format descriptions, parentheses indicate that you must enclose choices in parentheses if you specify more than one.
[ ] In command format descriptions, brackets indicate optional choices. You can choose one or more items or no items. Do not type the brackets on the command line. However, you must include the brackets in the syntax for OpenVMS directory specifications and for a substring specification in an assignment statement.
| In command format descriptions, vertical bars separate choices within brackets or braces. Within brackets, the choices are optional; within braces, at least one choice is required. Do not type the vertical bars on the command line.
{ } In command format descriptions, braces indicate required choices; you must choose at least one of the items listed. Do not type the braces on the command line.
bold text This typeface represents the introduction of a new term. It also represents the name of an argument, an attribute, or a reason.
italic text Italic text indicates important information, complete titles of manuals, or variables. Variables include information that varies in system output (Internal error number), in command lines (/PRODUCER= name), and in command parameters in text (where dd represents the predefined code for the device type).
UPPERCASE TEXT Uppercase text indicates a command, the name of a routine, the name of a file, or the abbreviation for a system privilege.
Monospace text Monospace type indicates code examples and interactive screen displays.

In the C programming language, monospace type in text identifies the following elements: keywords, the names of independently compiled external functions and files, syntax summaries, and references to variables or identifiers introduced in an example.

- A hyphen at the end of a command format description, command line, or code line indicates that the command or statement continues on the following line.
numbers All numbers in text are assumed to be decimal unless otherwise noted. Nondecimal radixes (binary, octal, or hexadecimal) are explicitly indicated.


Chapter 1
Introduction to OpenVMS Cluster System Management

"Cluster" technology was pioneered by Digital Equipment Corporation in 1983 with the VAXcluster system. The VAXcluster system was built using multiple standard VAX computing systems and the VMS operating system. The initial VAXcluster system offered the power and manageability of a centralized system and the flexibility of many physically distributed computing systems.

Through the years, the technology has evolved to support mixed-architecture cluster systems, and the name changed to OpenVMS Cluster systems. Initially, OpenVMS Alpha and OpenVMS VAX systems were supported in a mixed-architecture OpenVMS Cluster system. OpenVMS Version 8.2 introduced cluster support for OpenVMS Integrity server systems, either in a single-architecture cluster or in a mixed-architecture cluster with OpenVMS Alpha systems. HP continues to enhance and expand OpenVMS Cluster capabilities.

1.1 Overview

An OpenVMS Cluster system is a highly integrated organization of OpenVMS software, storage devices, and computers: Alpha, VAX, or Integrity servers, or a combination of Alpha and VAX servers or of Alpha and Integrity servers, all operating as a single system. The OpenVMS Cluster acts as a single virtual system, even though it is made up of many distributed systems. As members of an OpenVMS Cluster system, Alpha and VAX or Alpha and Integrity server systems can share processing resources, data storage, and queues under a single security and management domain, yet they can boot or shut down independently.
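For example, you can display the current cluster membership from any member node with the SHOW CLUSTER utility, which is described in detail in the HP OpenVMS System Management Utilities Reference Manual. The following interaction is an illustrative sketch only; the node names, system ID, and version strings are hypothetical, and the default display is abbreviated:

$ SHOW CLUSTER
View of Cluster from system ID 65301  node: NODE01
+--------+-------------+----------+
|      SYSTEMS         | MEMBERS  |
|--------+-------------+----------|
|  NODE  |  SOFTWARE   |  STATUS  |
|--------+-------------+----------|
| NODE01 | VMS V8.4    |  MEMBER  |
| NODE02 | VMS V8.4    |  MEMBER  |
+--------+-------------+----------+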

The distance between the computers in an OpenVMS Cluster system depends on the interconnects that you use. The computers can be located in one computer lab, on two floors of a building, in different buildings on a campus, or at two sites hundreds of miles apart.

An OpenVMS Cluster system with computers located at two or more sites is known as a multiple-site OpenVMS Cluster system. A multiple-site OpenVMS Cluster forms the basis of a disaster-tolerant OpenVMS Cluster system. For more information about multiple-site clusters, see the Guidelines for OpenVMS Cluster Configurations.

Disaster Tolerant Cluster Services for OpenVMS is an HP Services system management and software package for configuring and managing OpenVMS disaster tolerant clusters. For more information about Disaster Tolerant Cluster Services for OpenVMS, contact your HP Services representative or visit:

http://h71000.www7.hp.com/availability/index.html

1.1.1 Uses

OpenVMS Cluster systems are an ideal environment for developing high-availability applications, such as transaction processing systems, servers for network client or server applications, and data-sharing applications.

1.1.2 Benefits

Computers in an OpenVMS Cluster system interact to form a cooperative, distributed operating system and derive a number of benefits, as shown in the following table.
Benefit Description
Resource sharing OpenVMS Cluster software automatically synchronizes and load balances batch and print queues, storage devices, and other resources among all cluster members.
Flexibility Application programmers do not have to change their application code, and users do not have to know anything about the OpenVMS Cluster environment to take advantage of common resources.
High availability System designers can configure redundant hardware components to create highly available systems that eliminate or withstand single points of failure.
Nonstop processing The OpenVMS operating system, which runs on each node in an OpenVMS Cluster, facilitates dynamic adjustments to changes in the configuration.
Scalability Organizations can dynamically expand computing and storage resources as business needs grow or change without shutting down the system or applications running on the system.
Performance Because work can be distributed across multiple members, an OpenVMS Cluster system can deliver performance beyond that of any single member system.
Management Rather than repeating the same system management operation on each OpenVMS system separately, managers can perform a task concurrently on one or more nodes (see the SYSMAN example following this table).
Security Computers in an OpenVMS Cluster share a single security database that can be accessed by all nodes in a cluster.
Load balancing OpenVMS Cluster systems distribute work across cluster members based on the current load of each member.
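As an illustration of cluster-wide management, the SYSMAN utility can direct a single DCL command to some or all cluster members. The following sketch is illustrative only; the node names and timestamps are hypothetical, and SHOW TIME stands in for any management command you might run cluster-wide:

$ RUN SYS$SYSTEM:SYSMAN      ! Start the SYSMAN utility
SYSMAN> SET ENVIRONMENT/CLUSTER
SYSMAN> DO SHOW TIME
%SYSMAN-I-OUTPUT, command execution on node NODE01
  14-JUN-2010 10:15:00
%SYSMAN-I-OUTPUT, command execution on node NODE02
  14-JUN-2010 10:15:00
SYSMAN> EXIT

SET ENVIRONMENT/CLUSTER addresses every node in the local cluster, and DO executes the specified DCL command on each node in that environment.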

1.2 Hardware Components

OpenVMS Cluster system configurations consist of hardware components from the following general groups:
  • Computers
  • Physical interconnects
  • Storage devices

References: Detailed OpenVMS Cluster configuration guidelines can be found in the OpenVMS Cluster Software Product Description (SPD) and in Guidelines for OpenVMS Cluster Configurations.

1.2.1 Computers

Up to 96 computers, ranging from desktop to mainframe systems, can be members of an OpenVMS Cluster system. Active members that run the OpenVMS Alpha or OpenVMS Integrity server operating system and participate fully in OpenVMS Cluster negotiations can include:
  • Integrity server computers or workstations
  • Alpha computers or workstations

1.2.2 Physical Interconnects

An interconnect is a physical path that connects computers to other computers and to storage subsystems. OpenVMS Cluster systems support a variety of interconnects (also referred to as buses) so that members can communicate using the most appropriate and effective method possible. Table 1-1 lists the supported interconnects and the platforms on which each is supported.

Note

The CI, DSSI, and FDDI interconnects are supported on Alpha and VAX systems. Memory Channel and ATM interconnects are supported only on Alpha systems. For more information about these interconnects, see the previous version of the manual.

Table 1-1 Interconnect Support by OpenVMS Platform

Interconnect                        Platform Support              Comments
IP: UDP                             Integrity servers and Alpha   Supports Fast Ethernet, Gigabit Ethernet, and 10 Gb Ethernet; 10 Gb Ethernet is supported on Integrity servers only
Fibre Channel                       Integrity servers and Alpha   Shared storage only
SAS                                 Integrity servers
SCSI                                Integrity servers and Alpha   Limited shared storage configurations only
LAN: Ethernet, Fast Ethernet,       Integrity servers and Alpha   10 Gb Ethernet is supported on Integrity servers only
  Gigabit Ethernet, 10 Gb Ethernet
MEMORY CHANNEL                      Alpha                         Node-to-node communications only

For the most recent list of supported interconnects and speeds, see the HP OpenVMS Cluster Software Product Description (SPD 29.78.xx):

http://docs.hp.com/en/OpenVMS.html

1.2.3 OpenVMS Galaxy SMCI

In addition to the physical interconnects listed in Section 1.2.2, another type of interconnect, a shared memory CI (SMCI) for OpenVMS Galaxy instances, is available. SMCI supports cluster communications between Galaxy instances.

For more information about SMCI and Galaxy configurations, see the HP OpenVMS Alpha Partitioning and Galaxy Guide.

1.2.4 Storage Devices

A shared storage device is a disk or tape that is accessed by multiple computers in the cluster. Nodes access remote disks and tapes by means of the MSCP and TMSCP server software (described in Section 1.3.1).
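For example, once the MSCP server is running on a node, DCL commands such as the following make a local disk available to other members and display what the node is currently serving. This is a minimal sketch; the device name DKA200 is hypothetical:

$ SET DEVICE/SERVED DKA200:   ! Serve local disk DKA200 to the cluster
$ SHOW DEVICES/SERVED         ! Display devices served by this node's MSCP server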

Systems within an OpenVMS Cluster support a wide range of storage devices.

For the most recent list of supported storage devices, see the HP OpenVMS Version 8.4 Software Product Description (SPD 29.78.xx).

Also see the AlphaServer Supported Options Lists that can be found at the individual AlphaServer Web pages:

http://h18002.www1.hp.com/alphaserver/

See the Integrity servers Supported Options Lists that can be found at the individual Integrity server Web pages at:

http://h20341.www2.hp.com/integrity/cache/332341-0-0-0-121.html

