To help you understand the design and implementation of an OpenVMS
Cluster system, this chapter describes its basic architecture.
2.1 OpenVMS Cluster System Architecture
Figure 2-1 illustrates the protocol layers within the OpenVMS Cluster system architecture, ranging from the communications mechanisms at the base of the figure to the users of the system at the top of the figure. These protocol layers include the port layer, the System Communications Services (SCS) layer, the system applications (SYSAPs) layer, and other layered components.
Figure 2-1 OpenVMS Cluster System Architecture
Note: Not all interconnects are supported on all three architectures of OpenVMS. The CI, DSSI, and FDDI interconnects are supported on Alpha and VAX systems. Memory Channel and ATM interconnects are supported only on Alpha systems.
2.1.1 Port Layer
This lowest level of the architecture provides connections, in the form of communication ports and physical paths, between devices. The port layer can contain any of the interconnects supported by OpenVMS, such as CI, DSSI, FDDI, Memory Channel, ATM, and Fibre Channel.
Each interconnect is accessed by a port (also referred to as an
adapter) that connects to the processor node. For example, the Fibre
Channel interconnect is accessed by way of a Fibre Channel port.
2.1.2 SCS Layer
The SCS layer provides basic connection management and communications services in the form of datagrams, messages, and block transfers over each logical path. Table 2-1 describes these services.
Service | Delivery Guarantees | Usage |
---|---|---|
Datagrams: information units that fit in 1 packet or less. | Delivery of datagrams is not guaranteed. Datagrams can be lost, duplicated, or delivered out of order. | Status and information messages whose loss is not critical. Applications that have their own reliability protocols, such as DECnet or TCP/IP. |
Messages: information units that fit in 1 packet or less. | Messages are guaranteed to be delivered and to arrive in order. Virtual circuit sequence numbers are used on the individual packets. | Disk read and write requests. |
Block data transfers: copying (that is, reading or writing) any contiguous data between a local process or system virtual address space and an address on another node. Individual transfers are limited to the lesser of 2^32 - 1 bytes or the physical memory constraints of the host. Block data is a form of remote DMA transfer. | Delivery of block data is guaranteed. The sending and receiving ports and the port emulators cooperate in breaking the transfer into data packets and ensuring that all packets are correctly transmitted, received, and placed in the appropriate destination buffer. Block data transfers differ from messages in the size of the transfer. | Disk subsystems and disk servers, to move data associated with disk read and write requests. Fast remastering of large lock trees. Transferring large ICC messages. |
The SCS layer is implemented as a combination of hardware and software, or software only, depending upon the type of port. SCS manages connections in an OpenVMS Cluster and multiplexes messages between system applications over a common transport called a virtual circuit. A virtual circuit exists between each pair of SCS ports, and a set of SCS connections is multiplexed on that virtual circuit.
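The relationship among ports, virtual circuits, and SCS connections can be pictured with a small data model. The following Python sketch is purely illustrative: the class names (ScsLayer, VirtualCircuit, Connection), the port names, and the application labels are invented for this example and do not correspond to OpenVMS data structures or interfaces.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class Connection:
    """One SCS connection between a pair of system applications."""
    local_sysap: str
    remote_sysap: str


@dataclass
class VirtualCircuit:
    """Common transport between one pair of SCS ports; every SCS connection
    between those two ports is multiplexed on this single circuit."""
    port_pair: Tuple[str, ...]
    connections: List[Connection] = field(default_factory=list)


class ScsLayer:
    """Minimal sketch: one virtual circuit per pair of ports, many
    connections carried on each virtual circuit."""

    def __init__(self) -> None:
        self._circuits: Dict[Tuple[str, ...], VirtualCircuit] = {}

    def connect(self, local_port: str, remote_port: str,
                local_sysap: str, remote_sysap: str) -> Connection:
        key = tuple(sorted((local_port, remote_port)))
        vc = self._circuits.setdefault(key, VirtualCircuit(port_pair=key))
        conn = Connection(local_sysap, remote_sysap)
        vc.connections.append(conn)   # multiplexed on the shared circuit
        return conn


# Two different application pairs between the same two ports share one circuit.
scs = ScsLayer()
scs.connect("NODEA.PORT0", "NODEB.PORT0", "disk class driver", "disk server")
scs.connect("NODEA.PORT0", "NODEB.PORT0", "connection manager", "connection manager")
assert len(scs._circuits) == 1        # one virtual circuit between the port pair
```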
2.1.3 System Applications (SYSAPs) Layer
The next higher layer in the OpenVMS Cluster architecture is the SYSAPs layer. It consists of multiple system applications that provide, for example, access to disks and tapes and cluster membership control.
These components are described in detail later in this chapter.
2.1.4 Other Layered Components
A wide range of OpenVMS components are layered on top of the OpenVMS Cluster system architecture.
These components, except for volume shadowing, are described in detail
later in this chapter. Volume Shadowing for OpenVMS is described in
Section 6.6.
2.2 OpenVMS Cluster Software Functions
The OpenVMS Cluster software components that implement OpenVMS Cluster
communication and resource-sharing functions always run on every
computer in the OpenVMS Cluster. If one computer fails, the OpenVMS
Cluster system continues operating, because the components still run on
the remaining computers.
2.2.1 Functions
The following table summarizes the OpenVMS Cluster communication and resource-sharing functions and the components that perform them.
Function | Performed By |
---|---|
Ensure that OpenVMS Cluster computers communicate with one another to enforce the rules of cluster membership | Connection manager |
Synchronize functions performed by other OpenVMS Cluster components, OpenVMS products, and other software components | Distributed lock manager |
Share disks and files | Distributed file system |
Make disks available to nodes that do not have direct access | MSCP server |
Make tapes available to nodes that do not have direct access | TMSCP server |
Make queues available | Distributed job controller |
2.3 Ensuring the Integrity of Cluster Membership
The connection manager ensures that computers in an OpenVMS Cluster system communicate with one another to enforce the rules of cluster membership.
Computers in an OpenVMS Cluster system share various data and system
resources, such as access to disks and files. To achieve the
coordination that is necessary to maintain resource integrity, the
computers must maintain a clear record of cluster membership.
2.3.1 Connection Manager
The connection manager creates an OpenVMS Cluster when the first computer is booted and reconfigures the cluster when computers join or leave it during cluster state transitions. Its overall responsibilities include admitting and removing cluster members, maintaining a consistent view of cluster membership on every node, and preventing cluster partitioning.
2.3.2 Cluster Partitioning
A primary purpose of the connection manager is to prevent cluster partitioning, a condition in which nodes in an existing OpenVMS Cluster configuration divide into two or more independent clusters.
Cluster partitioning can result in data file corruption because the
distributed lock manager cannot coordinate access to shared resources
for multiple OpenVMS Cluster systems. The connection manager prevents
cluster partitioning using a quorum algorithm.
2.3.3 Quorum Algorithm
The quorum algorithm is a mathematical method for determining whether a majority of OpenVMS Cluster members exists so that resources can be shared across an OpenVMS Cluster system. Quorum is the number of votes that must be present for the cluster to function.
Quorum is a dynamic value calculated by the connection manager to
prevent cluster partitioning. The connection manager allows processing
to occur only if a majority of the OpenVMS Cluster members are
functioning.
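A minimal sketch of this majority rule (plain Python, not OpenVMS code), using the estimated-quorum formula shown in the following sections; the function names are invented for illustration:

```python
def estimated_quorum(expected_votes: int) -> int:
    """Estimated quorum = (EXPECTED_VOTES + 2) / 2, rounded down."""
    return (expected_votes + 2) // 2


def cluster_can_process(votes_present: int, quorum: int) -> bool:
    """Processing is allowed only while the votes present meet or exceed quorum."""
    return votes_present >= quorum


assert estimated_quorum(3) == 2   # three expected votes -> quorum of 2
```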
2.3.4 System Parameters
Two system parameters, VOTES and EXPECTED_VOTES, are key to the computations performed by the quorum algorithm. The following table describes these parameters.
Parameter | Description |
---|---|
VOTES | Specifies a fixed number of votes that a computer contributes toward quorum. The system manager can set the VOTES parameter on each computer or allow the operating system to set it to a default value (by default, satellite nodes contribute 0 votes and all other computers contribute 1 vote). Each Integrity server or Alpha computer with a nonzero value for the VOTES system parameter is considered a voting member. |
EXPECTED_VOTES | Specifies the sum of all VOTES held by OpenVMS Cluster members. The initial value is used to derive an estimate of the correct quorum value for the cluster. The system manager must set this parameter on each active Integrity server or Alpha system in the cluster, including satellites. |
2.3.5 Calculating Cluster Votes
The quorum algorithm operates as follows:
Step | Action |
---|---|
1 | When nodes in the OpenVMS Cluster boot, the connection manager uses the largest value for EXPECTED_VOTES of all systems present to derive an estimated quorum value according to the following formula: Estimated quorum = (EXPECTED_VOTES + 2)/2, rounded down. |
2 | During a state transition (whenever a node enters or leaves the cluster or when a quorum disk is recognized), the connection manager dynamically recomputes the cluster quorum value, taking the maximum of the current quorum value and the estimate derived from the EXPECTED_VOTES values of the members present. Note: Quorum disks are discussed in Section 2.3.8. |
3 | The connection manager compares the cluster votes value to the cluster quorum value: if the votes present equal or exceed quorum, cluster activity continues; if they fall below quorum, the connection manager blocks activity throughout the cluster until quorum is restored. |
Note: When a node leaves the OpenVMS Cluster system, the connection manager does not decrease the cluster quorum value. In fact, the connection manager never decreases the cluster quorum value; it only increases it, unless the REMOVE_NODE option was selected during shutdown. However, system managers can decrease the value according to the instructions in Section 10.11.2.
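Steps 2 and 3, together with the note above, can be summarized in a short sketch. This is illustrative Python only, not the connection manager's implementation; the function names are invented, and the sketch captures just the stated properties: quorum is recomputed at each state transition, it is never lowered automatically, and activity is blocked whenever the votes present fall below quorum.

```python
def recompute_quorum(current_quorum: int, expected_votes_seen: list[int]) -> int:
    """At a state transition, derive a new estimate from the largest
    EXPECTED_VOTES value present and never lower quorum automatically."""
    estimate = (max(expected_votes_seen) + 2) // 2
    return max(current_quorum, estimate)


def after_transition(current_quorum: int, expected_votes_seen: list[int],
                     votes_present: int) -> tuple[int, bool]:
    """Return the new quorum value and whether cluster activity may continue."""
    quorum = recompute_quorum(current_quorum, expected_votes_seen)
    return quorum, votes_present >= quorum


# A node booted with EXPECTED_VOTES = 5 joins a 3-vote cluster whose quorum was 2:
print(after_transition(current_quorum=2, expected_votes_seen=[3, 3, 5], votes_present=3))
# -> (3, True): quorum rises to 3 and, with 3 votes present, activity continues
```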
2.3.6 Example
Consider a cluster consisting of three computers, each computer having
its VOTES parameter set to 1 and its EXPECTED_VOTES parameter set to 3.
The connection manager dynamically computes the cluster quorum value to
be 2 (that is, (3 + 2)/2). In this example, any two of the three
computers constitute a quorum and can run in the absence of the third
computer. No single computer can constitute a quorum by itself.
Therefore, there is no way the three OpenVMS Cluster computers can be
partitioned and run as two independent clusters.
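The arithmetic in this example can be checked directly (plain Python, node names invented):

```python
from itertools import combinations

votes = {"A": 1, "B": 1, "C": 1}    # three computers, one vote each
quorum = (3 + 2) // 2               # EXPECTED_VOTES = 3 -> quorum = 2

for size in (1, 2, 3):
    for group in combinations(votes, size):
        print(group, sum(votes[n] for n in group) >= quorum)
# Single computers print False; any pair (and all three) prints True, so no
# lone computer can ever form its own cluster.
```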
2.3.7 Sub-Cluster Selection
To select the optimal sub-cluster to continue after a communication failure occurs, the two possible sub-clusters are compared: the sub-cluster with the greater number of votes continues, and if the votes are equal, the sub-cluster with the greater number of nodes continues; the losing sub-cluster is removed from the cluster.
2.3.8 Quorum Disk
A cluster system manager can designate a disk as a quorum disk. The quorum disk acts as a virtual cluster member whose purpose is to add one vote to the total cluster votes. By establishing a quorum disk, you can increase the availability of a two-node cluster; such a configuration can maintain quorum and continue operating if either the quorum disk or one node fails.
Note: Setting up a quorum disk is recommended only for OpenVMS Cluster configurations with two nodes. A quorum disk is neither necessary nor recommended for configurations with more than two nodes.
For example, assume an OpenVMS Cluster configuration with many satellites (that have no votes) and two nonsatellite systems (each having one vote) that downline load the satellites. Quorum is calculated as follows:
(EXPECTED_VOTES + 2)/2 = (2 + 2)/2 = 2
Because there is no quorum disk, if either nonsatellite system departs from the cluster, only one vote remains and cluster quorum is lost. Activity will be blocked throughout the cluster until quorum is restored.
However, if the configuration includes a quorum disk (adding one vote to the total cluster votes), and the EXPECTED_VOTES parameter is set to 3 on each node, then quorum will still be 2 even if one of the nodes leaves the cluster. Quorum is calculated as follows:
(EXPECTED_VOTES + 2)/2 = (3 + 2)/2 = 2
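The two calculations can be compared side by side with plain arithmetic (illustrative Python, not OpenVMS code):

```python
def quorum(expected_votes: int) -> int:
    return (expected_votes + 2) // 2   # rounded down


print(quorum(2))  # 2 -> without a quorum disk, losing one node leaves 1 vote, below quorum
print(quorum(3))  # 2 -> with a quorum disk vote, one node (1) + quorum disk (1) still meets quorum
```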
Rules: Each OpenVMS Cluster system can include only one quorum disk, and at least one computer must have a direct (not served) connection to the quorum disk.
Reference: For more information about enabling a quorum disk, see Section 8.2.4. Section 8.3.2 describes removing a quorum disk.
2.3.9 Quorum Disk Watcher
A quorum disk watcher is a computer that has a direct connection to the quorum disk. To enable a computer as a quorum disk watcher, use one of the following methods:
Method | Perform These Steps |
---|---|
Run the CLUSTER_CONFIG.COM procedure (described in Chapter 8) | Invoke the procedure and supply the quorum disk information when prompted. The procedure uses the information you provide to update the values of the DISK_QUORUM and QDSKVOTES system parameters. |
Respond YES when the OpenVMS installation procedure asks whether the cluster will contain a quorum disk (described in Chapter 4) | During the installation procedure, supply the quorum disk information when prompted. The procedure uses the information you provide to update the values of the DISK_QUORUM and QDSKVOTES system parameters. |
Edit the MODPARAMS or AGEN$ files (described in Chapter 8) | Edit the DISK_QUORUM and QDSKVOTES system parameters. |
Hint: If only one quorum disk watcher has direct access to the quorum disk, the quorum disk provides no additional availability; remove the quorum disk and give its votes to that node.
2.3.10 Rules for Specifying Quorum
For the quorum disk's votes to be counted in the total cluster votes, the following conditions must be met:
Hint: By increasing the quorum disk's votes to one
less than the total votes from both systems (and by increasing the
value of the EXPECTED_VOTES system parameter by the same amount), you
can boot and run the cluster with only one node.
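For a two-node cluster in which each system has one vote, the hint works out as follows (plain arithmetic in Python, not OpenVMS code):

```python
node_votes = 1                                   # each of the two systems
total_system_votes = 2 * node_votes              # 2
qdskvotes = total_system_votes - 1               # quorum disk votes: one less than that total
expected_votes = total_system_votes + qdskvotes  # raised by the same amount -> 3

quorum = (expected_votes + 2) // 2               # 2
print(node_votes + qdskvotes >= quorum)          # True: one node plus the quorum disk can run
```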
2.4 State Transitions
OpenVMS Cluster state transitions occur when a computer joins or leaves an OpenVMS Cluster system and when the cluster recognizes a quorum disk state change. The connection manager controls these events to ensure the preservation of data integrity throughout the cluster.
A state transition's duration and effect on users (applications) are
determined by the reason for the transition, the configuration, and the
applications in use.
2.4.1 Adding a Member
Every transition goes through one or more phases, depending on whether its cause is the addition of a new OpenVMS Cluster member or the failure of a current member.
Table 2-2 describes the phases of a transition caused by the addition of a new member.
Phase | Description |
---|---|
New member detection | Early in its boot sequence, a computer seeking membership in an OpenVMS Cluster system sends messages to current members asking to join the cluster. The first cluster member that receives the membership request acts as the new computer's advocate and proposes reconfiguring the cluster to include the computer. While the new computer is booting, no applications are affected. Note: The connection manager does not allow a computer to join the OpenVMS Cluster system if the node's value for EXPECTED_VOTES would raise quorum above the calculated cluster votes and thereby cause the OpenVMS Cluster to suspend activity (a sketch of this rule follows the table). |
Reconfiguration | During a configuration change due to a computer being added to an OpenVMS Cluster, all current OpenVMS Cluster members must establish communications with the new computer. Once communications are established, the new computer is admitted to the cluster. In some cases, the lock database is rebuilt. |
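The admission rule in the note above can be sketched as follows. This is an illustration of the stated rule only, with invented function and parameter names, not the connection manager's actual algorithm.

```python
def admit_new_member(current_votes: int, current_expected: int,
                     new_member_votes: int, new_member_expected: int) -> bool:
    """Admit the booting node only if the quorum implied by its EXPECTED_VOTES
    is still covered by the votes present once it has joined."""
    expected = max(current_expected, new_member_expected)
    new_quorum = (expected + 2) // 2
    return current_votes + new_member_votes >= new_quorum


# A zero-vote satellite booted with EXPECTED_VOTES = 9 would push quorum to 5
# in a cluster holding only 3 votes, so it is refused membership:
print(admit_new_member(current_votes=3, current_expected=3,
                       new_member_votes=0, new_member_expected=9))   # False
```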