Depending on the level of network security required, you might also want to consider how other security mechanisms, such as protocol encryption, can provide additional protection across the cluster.
Reference: See the HP OpenVMS Guide to System Security.
5.8 Coordinating System Files
Follow these guidelines to coordinate system files:
IF you are setting up... | THEN follow the procedures in... |
---|---|
A common-environment OpenVMS Cluster that consists of newly installed systems | HP OpenVMS System Manager's Manual to build these files. Because the files on new operating systems are empty except for the Digital-supplied accounts, very little coordination is necessary. |
An OpenVMS Cluster that will combine one or more computers that have been running with computer-specific files | Appendix B to create common copies of the files from the computer-specific files. |
5.8.1 Procedure
In a common-environment cluster with one common system disk, you use a
common copy of each system file and place the files in the
SYS$COMMON:[SYSEXE] directory on the common system disk or on a disk
that is mounted by all cluster nodes. No further action is required.
To prepare a common user environment for an OpenVMS Cluster system that includes more than one common OpenVMS Integrity server system disk or more than one common OpenVMS Alpha system disk, you must coordinate the system files on those disks.
Table 5-4 describes the procedure for coordinating system files across multiple common system disks:
Step | Action |
---|---|
1 | Decide where to locate the SYSUAF.DAT and NETPROXY.DAT files. In a cluster with multiple system disks, system management is much easier if the common system files are located on a single disk that is not a system disk. |
2 | Copy SYS$SYSTEM:SYSUAF.DAT and SYS$SYSTEM:NETPROXY.DAT to a location other than the system disk. |
3 | Copy SYS$SYSTEM:RIGHTSLIST.DAT and SYS$SYSTEM:VMSMAIL_PROFILE.DATA to the same directory in which SYSUAF.DAT and NETPROXY.DAT reside. |
4 | Edit the file SYS$COMMON:[SYSMGR]SYLOGICALS.COM on each system disk and define logical names that specify the location of the cluster common files. For example, the files might be located on $1$DGA16. |
5 | To ensure that the system disks are mounted correctly with each reboot, invoke the cluster mount procedure; for example: $ @SYS$SYSDEVICE:[VMS$COMMON.SYSMGR]CLU_MOUNT_DISK.COM $1$DGA16: volume-label |
6 | When you are ready to start the queuing system, be sure you have moved the queue and journal files to a cluster-available disk. Any cluster common disk is a good choice if the disk has sufficient space. |
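The file placement and queue startup in the steps above can be sketched in DCL as follows. This is a hedged sketch, not the manual's literal example: the device and directory names ($1$DGA16:[VMS$COMMON.SYSEXE]) are assumptions, and the exact set of logical names your site needs may differ.

```
$ ! Assumed location of the cluster common files -- adjust for your site.
$ ! These definitions would go in SYS$COMMON:[SYSMGR]SYLOGICALS.COM.
$ DEFINE/SYSTEM/EXEC SYSUAF          $1$DGA16:[VMS$COMMON.SYSEXE]SYSUAF.DAT
$ DEFINE/SYSTEM/EXEC NETPROXY        $1$DGA16:[VMS$COMMON.SYSEXE]NETPROXY.DAT
$ DEFINE/SYSTEM/EXEC RIGHTSLIST      $1$DGA16:[VMS$COMMON.SYSEXE]RIGHTSLIST.DAT
$ DEFINE/SYSTEM/EXEC VMSMAIL_PROFILE $1$DGA16:[VMS$COMMON.SYSEXE]VMSMAIL_PROFILE.DATA
$ !
$ ! Start the queue manager with its queue and journal files
$ ! on the cluster common disk
$ START/QUEUE/MANAGER $1$DGA16:[VMS$COMMON.SYSEXE]
```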
In OpenVMS Cluster systems on the LAN and in mixed-interconnect clusters, you must also coordinate the SYS$MANAGER:NETNODE_UPDATE.COM file, which contains all essential network configuration data for satellites. NETNODE_UPDATE.COM is updated each time you add or remove a satellite or change its Ethernet or FDDI hardware address. This file is discussed more thoroughly in Section 10.4.2.
In OpenVMS Cluster systems configured with DECnet for OpenVMS software,
you must also coordinate NETNODE_REMOTE.DAT, which is the remote node
network database.
5.9 System Time on the Cluster
When a computer joins the cluster, the cluster attempts to set the joining computer's system time to the current time on the cluster. Although it is likely that the system time will be similar on each cluster computer, there is no guarantee that the times will match exactly. Also, no attempt is made to ensure that the system times remain similar throughout the cluster. (For example, there is no protection against different computers having different clock rates.)
An OpenVMS Cluster system spanning multiple time zones must use a single, clusterwide common time on all nodes. Use of a common time ensures timestamp consistency (for example, between applications and file-system instances) across the OpenVMS Cluster members.
5.9.1 Setting System Time
Use the SYSMAN command CONFIGURATION SET TIME to set the time across the cluster. This command issues warnings if the time on all nodes cannot be set within certain limits. Refer to the HP OpenVMS System Manager's Manual for information about the SET TIME command.
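For example, the cluster time can be set from a single SYSMAN session. The date and time shown are placeholders:

```
$ RUN SYS$SYSTEM:SYSMAN
SYSMAN> SET ENVIRONMENT/CLUSTER
SYSMAN> CONFIGURATION SET TIME 15-JUN-2005:14:30:00
SYSMAN> EXIT
```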
Chapter 6: Cluster Storage Devices

One of the most important features of OpenVMS Cluster systems is the ability to provide access to devices and files across multiple systems.
In a traditional computing environment, a single system is directly attached to its storage subsystems. Even though the system may be networked with other systems, when the system is shut down, no other system on the network has access to its disks or any other devices attached to the system.
In an OpenVMS Cluster system, disks and tapes can be made accessible to
one or more members. So, if one computer shuts down, the remaining
computers still have access to the devices.
6.1 Data File Sharing
Cluster-accessible devices play a key role in OpenVMS Clusters because, when you place data files or applications on a cluster-accessible device, computers can share a single copy of each common file. Data sharing is possible between Integrity server systems, between Alpha systems, and between Integrity server and Alpha systems.
In addition, multiple systems that are permitted in the same OpenVMS Cluster system can write to a shared disk file simultaneously. It is this ability that allows multiple systems in an OpenVMS Cluster to share a single system disk; multiple systems can boot from the same system disk and share operating system files and utilities to save disk space and simplify system management.
Note: Tapes do not allow multiple systems to access a
tape file simultaneously.
6.1.1 Access Methods
Depending on your business needs, you may want to restrict access to a particular device to the users on the computer that are directly connected (local) to the device. Alternatively, you may decide to set up a disk or tape as a served device so that any user on any OpenVMS Cluster computer can allocate and use it.
Table 6-1 describes the various access methods.
Method | Device Access | Comments | Illustrated in |
---|---|---|---|
Local | Restricted to the computer that is directly connected to the device. | Can be set up to be served to other systems. | Figure 6-3 |
Dual ported | Using either of two physical ports, each of which can be connected to separate controllers. A dual-ported disk can survive the failure of a single controller by failing over to the other controller. | As long as one of the controllers is available, the device is accessible by all systems in the cluster. | Figure 6-1 |
Shared | Through a shared interconnect to multiple systems. | Can be set up to be served to systems that are not on the shared interconnect. | Figure 6-2 |
Served | Through a computer that has the MSCP or TMSCP server software loaded. | MSCP and TMSCP serving are discussed in Section 6.3. | Figures 6-2 and 6-3 |
Dual pathed | Possible through more than one path. | If one path fails, the device is accessed over the other path. Requires the use of allocation classes (described in Section 6.2.1) to provide a unique, path-independent name. | Figure 6-2 |
Note: The path to an individual disk may appear to be local from some nodes and served from others.
When storage subsystems are connected directly to a specific system, the availability of the subsystem is lower due to the reliance on the host system. To increase the availability of these configurations, OpenVMS Cluster systems support dual porting, dual pathing, and MSCP and TMSCP serving.
Figure 6-1 shows a dual-ported configuration, in which the disks have independent connections to two separate computers. As long as one of the computers is available, the disk is accessible by the other systems in the cluster.
Figure 6-1 Dual-Ported Disks
Note: Disks can be shadowed using Volume Shadowing for OpenVMS. The automatic recovery from system failure provided by dual porting and shadowing is transparent to users and does not require any operator intervention.
Figure 6-2 shows a dual-pathed FC and Ethernet configuration. The disk devices, accessible through a shared SCSI interconnect, are MSCP served to the client nodes on the LAN.
Rule: A dual-pathed DSA disk cannot be used as a system disk for a directly connected CPU. Because a device can be online to only one controller at a time, only one of the server nodes can use its local connection to the device. The second server node accesses the device through the MSCP server (or the TMSCP server). If the computer that is currently serving the device fails, the other computer detects the failure and fails the device over to its local connection. The device thereby remains available to the cluster.
Dual-pathed disks or tapes can be failed over between two computers that serve the devices to the cluster, provided that both computers generate the same name for the device; in practice, this means they must use the same nonzero allocation class (see Section 6.2.1).
Caution: Failure to observe these requirements can endanger data integrity.
You can set up HSG or HSV storage devices to be dual ported between two storage subsystems, as shown in Figure 6-3.
Figure 6-3 Configuration with Cluster-Accessible Devices
By design, HSG and HSV disks and tapes are directly accessible by all OpenVMS Cluster nodes that are connected to the same star coupler. Therefore, if the devices are dual ported, they are automatically dual pathed. Computers connected by FC can access a dual-ported HSG or HSV device by way of a path through either subsystem connected to the device. If one subsystem fails, access fails over to the other subsystem.
Note: To control the path that is taken during failover, you can specify a preferred path to force access to disks over a specific path. Section 6.1.3 describes the preferred-path capability.
See Chapter 6 of Guidelines for OpenVMS Cluster
Configurations, Configuring Multiple Paths to SCSI and Fibre
Channel Storage for more information on FC storage devices.
6.1.3 Specifying a Preferred Path
The operating system supports specifying a preferred path for DSA disks, including RA series disks and disks that are accessed through the MSCP server. (This function is not available for tapes.) If a preferred path is specified for a disk, the MSCP disk class drivers use that path:
In addition, you can initiate failover of a mounted disk to force the disk to the preferred path or to use load-balancing information for disks accessed by MSCP servers.
You can specify the preferred path by using the SET PREFERRED_PATH DCL command or by using the $QIO function (IO$_SETPRFPATH), with the P1 parameter containing the address of a counted ASCII string (.ASCIC). This string is the node name of the HSG or HSV, or of the OpenVMS system that is to be the preferred path.
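For example, a preferred path might be set, and an already mounted disk forced over to it, as follows. The device and node names are hypothetical; check the HP OpenVMS DCL Dictionary for the exact qualifiers supported by your version:

```
$ ! Prefer the path through node HSG001 for disk $1$DGA8
$ SET PREFERRED_PATH $1$DGA8: /HOST=HSG001
$ ! Force the mounted disk over to the preferred path now
$ SET PREFERRED_PATH $1$DGA8: /HOST=HSG001 /FORCE
```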
Rule: The node name must match an existing node running the MSCP server that is known to the local node.
Reference: For more information about the use of the SET PREFERRED_PATH DCL command, refer to the HP OpenVMS DCL Dictionary: N--Z.
For more information about the use of the IO$_SETPRFPATH function,
refer to the HP OpenVMS I/O User's Reference Manual.
6.2 Naming OpenVMS Cluster Storage Devices
The naming convention of Fibre Channel devices is documented in the Fibre Channel chapter of Guidelines for OpenVMS Cluster Configurations. The naming of all other devices is described in this section.
In the OpenVMS operating system, a device name takes the form ddcu, where:
dd is the device code (for example, DK for a SCSI disk)
c is the controller designation (a letter)
u is the unit number
For SCSI, the controller letter is assigned by OpenVMS, based on the system configuration. The unit number is determined by the SCSI bus ID and the logical unit number (LUN) of the device.
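For example (device names chosen for illustration):

```
DKA300   ! dd = DK (SCSI disk), c = controller A, u = unit 300
MKB500   ! dd = MK (SCSI tape), c = controller B, u = unit 500
```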
Because device names must be unique in an OpenVMS Cluster, and because every cluster member must use the same name for the same device, OpenVMS adds a prefix to the device name, as follows:
node$ddcu (used when the serving node's allocation class is 0)
$allocation-class$ddcu (used when a nonzero allocation class is assigned)
The purpose of allocation classes is to provide unique and unchanging device names. The device name is used by the OpenVMS Cluster distributed lock manager in conjunction with OpenVMS facilities (such as RMS and the XQP) to uniquely identify shared devices, files, and data.
Allocation classes are required in OpenVMS Cluster configurations where storage devices are accessible through multiple paths. Without the use of allocation classes, device names that relied on node names would change as access paths to the devices change.
Prior to OpenVMS Version 7.1, only one type of allocation class existed, which was node based. It was named allocation class. OpenVMS Version 7.1 introduced a second type, port allocation class, which is specific to a single interconnect and is assigned to all devices attached to that interconnect. Port allocation classes were originally designed for naming SCSI devices. Their use has been expanded to include additional device types: floppy disks, PCI RAID controller disks, and IDE disks.
The use of port allocation classes is optional. They are designed to solve the device-naming and configuration conflicts that can occur in certain configurations, as described in Section 6.2.3.
To differentiate between the earlier node-based allocation class and the newer port allocation class, the term node allocation class was assigned to the earlier type.
Prior to OpenVMS Version 7.2, all nodes with direct access to the same multipathed device were required to use the same nonzero value for the node allocation class. OpenVMS Version 7.2 introduced the MSCP_SERVE_ALL system parameter, which can be set to serve all disks or to exclude those whose node allocation class differs.
If SCSI devices are connected to multiple hosts and if port allocation classes are not used, then all nodes with direct access to the same multipathed devices must use the same nonzero node allocation class.
Multipathed MSCP controllers also have an allocation class parameter,
which is set to match that of the connected nodes. (If the allocation
class does not match, the devices attached to the nodes cannot be
served.)
6.2.2 Specifying Node Allocation Classes
A node allocation class can be assigned to computers and to HSG or HSV controllers. The node allocation class is a numeric value from 1 to 255 that is assigned by the system manager.
The default node allocation class value is 0. A node allocation class value of 0 is appropriate only when serving a local, single-pathed disk. If a node allocation class of 0 is assigned, served devices are named using the node-name$device-name syntax, that is, the device name prefix reverts to the node name.
The following rules apply to specifying node allocation class values:
System managers provide node allocation classes separately for disks and tapes. The node allocation class for disks and the node allocation class for tapes can be different.
The node allocation class names are constructed as follows:
$disk-allocation-class$device-name
$tape-allocation-class$device-name
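For example, the device names used in Figure 6-5 follow this pattern (shown with a hypothetical node allocation class of 1):

```
$1$DUA17     ! disk allocation class 1, DU disk, controller A, unit 17
$1$MUA12     ! tape allocation class 1, MU tape, controller A, unit 12
JUPITR$DUA17 ! the same disk's name if JUPITR used allocation class 0
```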
Caution: Failure to set node allocation class values and device unit numbers correctly can endanger data integrity and cause locking conflicts that suspend normal cluster operations.
Figure 6-5 includes satellite nodes that access devices $1$DUA17 and $1$MUA12 through the JUPITR and NEPTUN computers. In this configuration, the computers JUPITR and NEPTUN require node allocation classes so that the satellite nodes are able to use consistent device names regardless of the access path to the devices.
Note: System management is usually simplified by using the same node allocation class value for all servers and for HSG and HSV subsystems; you can arbitrarily choose a number between 1 and 255. Note, however, that to change a node allocation class value, you must shut down and reboot the entire cluster (described in Section 8.6). If you use a common node allocation class for computers and controllers, ensure that all devices have unique unit numbers.