HP OpenVMS Systems Documentation
Guidelines for OpenVMS Cluster Configurations
11.2.5 Summary: Single Versus Multiple System Disks

Use the information in Table 11-3 to determine whether you need a single system disk for the entire OpenVMS Cluster or multiple system disks.
11.3 OpenVMS Cluster Environment Strategies

Depending on your processing needs, you can prepare either a common environment, in which all environment files are shared clusterwide, or a multiple environment, in which some files are shared clusterwide and others are accessible only by certain OpenVMS Cluster members.

The following are the most frequently used and manipulated OpenVMS Cluster environment files:

SYS$SYSTEM:SYSUAF.DAT
Reference: For more information about managing these
files, see OpenVMS Cluster Systems.
A common OpenVMS Cluster environment is an operating environment that is identical on all nodes in the OpenVMS Cluster. A common environment is easier to manage than multiple environments because you use a common version of each system file. The environment is set up so that:
The simplest and least expensive environment strategy is to have one system disk for the OpenVMS Cluster with all environment files on the same disk, as shown in Figure 11-1. The benefits of this strategy are:
11.3.2 Putting Environment Files on a Separate, Common Disk

For an OpenVMS Cluster in which every node shares the same system disk and environment, most common environment files are located in the SYS$SYSTEM directory. However, you may want to move environment files to a separate disk to improve OpenVMS Cluster performance. Because the environment files typically account for 80% of the system-disk activity, putting them on a separate disk decreases activity on the system disk. Figure 11-3 shows an example of a separate, common disk. Note that if you move environment files such as SYSUAF.DAT to a separate, common disk, SYSUAF.DAT will no longer be in its default location of SYS$SYSTEM:SYSUAF.DAT.
Reference: See OpenVMS Cluster Systems for procedures to ensure
that every node in the OpenVMS Cluster can access SYSUAF.DAT in its new
location.
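The usual way to make a relocated file visible clusterwide is to define a system logical name that points to its new location, typically in SYS$MANAGER:SYLOGICALS.COM so that the definition is established at startup on every node. The following is a minimal sketch; the disk and directory names are hypothetical examples, not defaults:

    $ ! Excerpt from SYS$MANAGER:SYLOGICALS.COM (sketch)
    $ ! DISK$COMMON and the [SYSEXE] directory are example names only
    $ DEFINE/SYSTEM/EXEC SYSUAF DISK$COMMON:[SYSEXE]SYSUAF.DAT

Because every node executes the same startup definition, all nodes resolve SYSUAF to the same file on the common disk.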
Multiple environments can vary from node to node. You can set up an individual node or a subset of nodes to:
Figure 11-4 shows an example of a multiple environment.

Figure 11-4 Multiple-Environment OpenVMS Cluster

In Figure 11-4, the multiple-environment OpenVMS Cluster consists of two system disks: one for VAX nodes and one for Alpha nodes. The common disk contains environment files for each node or group of nodes. Although many OpenVMS Cluster system managers prefer the simplicity of a single (common) environment, duplicating environment files is necessary for creating multiple environments that do not share resources across every node. Each environment can be tailored to the types of tasks users perform and the resources they use, and the configuration can have many different applications installed.
Each of the four DSSI nodes has its own page and swap disk, offloading
the Alpha and VAX system disks on the DSSI interconnect from page and
swap activity. All of the disks are shadowed across the DSSI
interconnects, which protects the disks if a failure occurs.
This section describes additional multiple-environment strategies, such
as using multiple SYSUAF.DAT files and multiple queue managers.
Most OpenVMS Clusters are managed with one user authorization (SYSUAF.DAT) file, but you can use multiple user authorization files to limit access for some users to certain systems. In this scenario, users who need access to all systems also need multiple passwords. Be careful about security with multiple SYSUAF.DAT files. The OpenVMS VAX and OpenVMS Alpha operating systems do not support multiple security domains. Reference: See OpenVMS Cluster Systems for the list of fields that need to be the same for a single security domain, including SYSUAF.DAT entries.
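One way to give a subset of nodes a separate authorization file is to define the SYSUAF logical name differently on those nodes in a node-specific startup procedure. A minimal sketch, in which the node name SECURE1 and the disk DISK$SECURE are hypothetical:

    $ ! Node-specific startup fragment (sketch)
    $ IF F$GETSYI("NODENAME") .EQS. "SECURE1" THEN -
          DEFINE/SYSTEM/EXEC SYSUAF DISK$SECURE:[SYSEXE]SYSUAF.DAT

All other nodes continue to use the clusterwide SYSUAF definition, while SECURE1 authorizes users against its own file.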
Because Alpha systems require higher process quotas, system managers
often respond by creating multiple SYSUAF.DAT files. This is not an
optimal solution. Multiple SYSUAF.DAT files are intended only to vary
environments from node to node, not to increase process quotas. To
increase process quotas, HP recommends that you keep one SYSUAF.DAT
file and use system parameters to override the process quotas in the
SYSUAF.DAT file, thereby controlling resources for your Alpha systems.
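For example, the minimum process-quota system parameters (PQL_M*) set floors that take precedence over lower values in SYSUAF.DAT at process creation. A sketch of MODPARAMS.DAT additions for the Alpha nodes; the values are illustrative only:

    ! Additions to SYS$SPECIFIC:[SYSEXE]MODPARAMS.DAT on each Alpha node
    ! (illustrative values; run AUTOGEN afterward to apply them)
    PQL_MPGFLQUOTA = 65536     ! minimum paging file quota
    PQL_MWSEXTENT  = 16384     ! minimum working set extent
    PQL_MBYTLM     = 100000    ! minimum buffered I/O byte limit

With these floors in place, a single SYSUAF.DAT can serve both architectures while Alpha processes still receive adequate quotas.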
If the number of batch and print transactions on your OpenVMS Cluster is causing congestion, you can implement multiple queue managers to distribute the batch and print loads between nodes. For example, you can create separate queue managers for batch queues and print queues. Every OpenVMS Cluster has only one QMAN$MASTER.DAT file; multiple queue managers are defined through multiple *.QMAN$QUEUES and *.QMAN$JOURNAL files. Place each pair of queue manager files on a different disk. If the QMAN$MASTER.DAT file has contention problems, place it on a solid-state disk to increase the number of batch and print transactions your OpenVMS Cluster can process.
Reference: See HP OpenVMS System Manager's Manual, Volume 1: Essentials for examples and commands
to implement multiple queue managers.
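As a sketch, an additional queue manager might be created and used as follows; the manager, node, disk, and queue names are all hypothetical:

    $ ! Create a second queue manager for batch work (sketch)
    $ START/QUEUE/MANAGER/ADD/NAME_OF_MANAGER=BATCH_MANAGER -
          /ON=(NODE21,NODE22,*) DISK$BATCH:[QUEUES]
    $ ! Assign a new queue to it at initialization time
    $ INITIALIZE/QUEUE/BATCH/ON=NODE21::/START -
          /NAME_OF_MANAGER=BATCH_MANAGER NODE21_BATCH

The default queue manager, SYS$QUEUE_MANAGER, continues to control any queues that are not explicitly assigned to another manager.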
OpenVMS Cluster systems use a quorum algorithm to ensure synchronized access to storage. The quorum algorithm is a mathematical method for determining whether a majority of OpenVMS Cluster members exists so that they can "vote" on how resources can be shared across an OpenVMS Cluster system. The connection manager, which calculates quorum as a dynamic value, allows processing to occur only if a majority of the OpenVMS Cluster members are functioning. Quorum votes are contributed by:
Each OpenVMS Cluster system can include only one quorum disk. The disk cannot be a member of a shadow set, but it can be the system disk.
The connection manager knows about the quorum disk from "quorum
disk watchers," which are any systems that have a direct, active
connection to the quorum disk.
At least two systems should have a direct connection to the quorum disk. This ensures that the quorum disk votes are accessible if one of the systems fails. When you consider quorum strategies, you must decide under what failure circumstances you want the OpenVMS Cluster to continue. Table 11-4 describes four options from which to choose.
1 These strategies are mutually exclusive; choose only one.
Reference: For more information about quorum disk
management, see OpenVMS Cluster Systems.
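The connection manager computes quorum as (EXPECTED_VOTES + 2)/2, truncated to an integer, so a quorum disk's QDSKVOTES value counts directly in that calculation. A minimal sketch of enabling a quorum disk with SYSGEN follows; the device name is hypothetical, and entries in MODPARAMS.DAT processed by AUTOGEN are the usual way to make such changes permanent:

    $ RUN SYS$SYSTEM:SYSGEN
    SYSGEN> USE CURRENT
    SYSGEN> SET DISK_QUORUM "$1$DGA12"   ! device to serve as quorum disk
    SYSGEN> SET QDSKVOTES 1              ! votes contributed by the disk
    SYSGEN> SET EXPECTED_VOTES 3         ! total votes configured clusterwide
    SYSGEN> WRITE CURRENT                ! takes effect at the next reboot
    SYSGEN> EXIT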
OpenVMS Cluster state transitions occur when a system joins or leaves an OpenVMS Cluster system and when the OpenVMS Cluster recognizes a quorum-disk state change. The connection manager handles these events to ensure the preservation of data integrity throughout the OpenVMS Cluster. State transitions should be a concern only if systems are joining or leaving an OpenVMS Cluster system frequently enough to cause disruption. A state transition's duration and its effect on users and applications are determined by the reason for the transition, the configuration, and the applications in use. By managing transitions effectively, system managers can control:
11.6.1 Dealing with State Transitions

The following guidelines describe effective ways of dealing with transitions so that you can minimize the actual transition time as well as the side effects after the transition.
Reference: For more detailed information about OpenVMS
Cluster transitions and their phases, system parameters, and quorum
management, see OpenVMS Cluster Systems.
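As a sketch, the SHOW CLUSTER utility can watch membership, votes, and quorum while transitions occur, and SET CLUSTER/EXPECTED_VOTES recalculates quorum after members have been removed permanently; the CL_QUORUM and CL_VOTES fields below belong to SHOW CLUSTER's CLUSTER class:

    $ SHOW CLUSTER/CONTINUOUS
    Command> ADD CL_QUORUM, CL_VOTES    ! display clusterwide quorum and votes
    Command> EXIT
    $ ! After members leave for good, readjust quorum downward:
    $ SET CLUSTER/EXPECTED_VOTES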
HP provides two levels of support, warranted and migration, for mixed-version and mixed-architecture OpenVMS Cluster systems. Warranted support means that HP has fully qualified the two versions coexisting in an OpenVMS Cluster and will answer all problems identified by customers using these configurations. Migration support helps customers move to warranted OpenVMS Cluster version mixes with minimal impact on their cluster environments. Migration support means that HP has qualified the versions for use together in configurations that are migrating in a staged fashion to a newer version of OpenVMS VAX or of OpenVMS Alpha. Problem reports submitted against these configurations will be answered by HP. However, in exceptional cases, HP may request that you move to a warranted configuration as part of answering the problem.
Table 11-6 shows the level of support provided for all possible version pairings.
In a mixed-version cluster, you must install remedial kits on earlier
versions of OpenVMS. For a complete list of required remedial kits, see
the OpenVMS Alpha Version 7.3-1 Release Notes.
OpenVMS Alpha and OpenVMS VAX systems can work together in the same
OpenVMS Cluster to provide both flexibility and migration capability.
You can add Alpha processing power to an existing VAXcluster, enabling
you to run applications that are system specific or hardware
specific.
OpenVMS Alpha Version 7.1 (and higher) and OpenVMS VAX Version 7.1 (and higher) enable VAX boot nodes to provide boot service to Alpha satellites and Alpha boot nodes to provide boot service to VAX satellites. This support, called cross-architecture booting, increases configuration flexibility and provides higher availability of boot servers for satellites. Two configuration scenarios make cross-architecture booting desirable:
11.8.2 Restrictions

You cannot perform OpenVMS operating system and layered product installations and upgrades across architectures. For example, you must install and upgrade OpenVMS Alpha software using an Alpha system. When you configure OpenVMS Cluster systems that take advantage of cross-architecture booting, ensure that at least one system from each architecture is configured with a disk that can be used for installations and upgrades. System disks can contain only a single version of the OpenVMS operating system and are architecture-specific. For example, OpenVMS VAX Version 7.3 cannot coexist on a system disk with OpenVMS Alpha Version 7.3.