HP OpenVMS Cluster Systems



5.4.4 Alias Collisions Involving Clusterwide Logical Name Tables

Alias collisions involving clusterwide logical name tables are treated differently from alias collisions of other types of logical name tables. Table 5-2 describes the types of collisions and their outcomes.

Table 5-2 Alias Collisions and Outcomes
Collision type: Creating a local table with same name and access mode as an existing clusterwide table
Outcome: New local table is not created. The condition value SS$_NORMAL is returned, which means that the service completed successfully but the logical name table already exists. The existing clusterwide table and its names on all nodes remain in effect.

Collision type: Creating a clusterwide table with same name and access mode as an existing local table
Outcome: New clusterwide table is created. The condition value SS$_LNMCREATED is returned, which means that the logical name table was created. The local table and its names are deleted. If the clusterwide table was created with the DCL command DEFINE, a message is displayed:

DCL-I-TABSUPER, previous table table_name has been superseded

If the clusterwide table was created with the $CRELNT system service, $CRELNT returns the condition value SS$_SUPERSEDE.

Collision type: Creating a clusterwide table with same name and access mode as an existing clusterwide table
Outcome: New clusterwide table is not created. The condition value SS$_NORMAL is returned, which means that the service completed successfully but the logical name table already exists. The existing table and all its names remain in effect, regardless of the setting of the $CRELNT system service's CREATE-IF attribute. This prevents surprise implicit deletions of existing table names from other nodes.

5.4.5 Creating Clusterwide Logical Names

To create a clusterwide logical name, you must have write (W) access to the table in which the logical name is to be entered, or SYSNAM privilege if you are creating clusterwide logical names only in LNM$SYSCLUSTER. Unless you specify an access mode (user, supervisor, and so on), the access mode of the logical name you create defaults to the access mode from which the name was created. If you create the name with a DCL command, the access mode defaults to supervisor mode. If you create the name with a program, the access mode typically defaults to user mode.

When you create a clusterwide logical name, you must include the name of a clusterwide logical name table in the definition of the logical name. You can create clusterwide logical names by using DCL commands or with the $CRELNM system service.

The following example shows how to create a clusterwide logical name in the default clusterwide logical name table, LNM$CLUSTER_TABLE, using the DEFINE command:


$ DEFINE/TABLE=LNM$CLUSTER_TABLE logical-name equivalence-string

To create clusterwide logical names that will reside in a clusterwide logical name table you created, you define the new clusterwide logical name with the DEFINE command, specifying your new clusterwide table's name with the /TABLE qualifier, as shown in the following example:


$ DEFINE/TABLE=new-clusterwide-logical-name-table logical-name - 
_$ equivalence-string
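
You can achieve the same result from a program by calling the $CRELNM system service and naming a clusterwide table in the tabnam argument. The following C fragment is a minimal sketch of this approach; the logical name CLUSTER_APPS and its equivalence string are placeholders chosen for illustration.


#include <descrip.h>
#include <lnmdef.h>
#include <starlet.h>
#include <stdio.h>

int main(void)
{
    /* The name becomes clusterwide because it is created in a
       clusterwide table.  The name and equivalence string below
       are examples only. */
    $DESCRIPTOR(tabnam, "LNM$CLUSTER_TABLE");
    $DESCRIPTOR(lognam, "CLUSTER_APPS");
    char equiv[] = "$1$DKA500:[COMMON_APPS]";

    /* Item list: one LNM$_STRING equivalence name, then a terminator. */
    struct {
        unsigned short buflen, itmcod;
        void *bufadr, *retlen;
    } itmlst[] = {
        { sizeof equiv - 1, LNM$_STRING, equiv, 0 },
        { 0, 0, 0, 0 }
    };

    unsigned int status = sys$crelnm(0, &tabnam, &lognam, 0, itmlst);
    if (!(status & 1))
        printf("$CRELNM failed, status = %u\n", status);
    return status;
}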

Note

If you attempt to create a new clusterwide logical name with the same access mode and identical equivalence names and attributes as an existing clusterwide logical name, the existing name is not deleted, and no messages are sent to remote nodes. This behavior differs from similar attempts for other types of logical names, which delete the existing name and create the new one. For clusterwide logical names, this difference is a performance enhancement.

The condition value SS$_NORMAL is returned. The service completed successfully, but the new logical name was not created.

5.4.6 Management Guidelines

When using clusterwide logical names, observe the following guidelines:

  1. Do not use certain logical names clusterwide.
    The following logical names are not valid for clusterwide use:
    • Mailbox names, because mailbox devices are local to a node.
    • SYS$NODE and SYS$NODE_FULLNAME, because they must be in LNM$SYSTEM_TABLE and are node specific.
    • LMF$LICENSE_TABLE.
  2. Do not redefine LNM$SYSTEM.
    LNM$SYSTEM is now defined as LNM$SYSTEM_TABLE, LNM$SYSCLUSTER_TABLE. Do not reverse the order of these two tables. If you do, then any names created using the /SYSTEM qualifier or in LNM$SYSTEM would go in LNM$SYSCLUSTER_TABLE and be clusterwide. Various system failures would result. For example, the MOUNT/SYSTEM command would attempt to create a clusterwide logical name for a mounted volume, which would result in an error.
  3. Keep LNM$SYSTEM contents in LNM$SYSTEM.
    Do not merge the logical names in LNM$SYSTEM into LNM$SYSCLUSTER. Many system logical names in LNM$SYSTEM contain system roots and either node-specific devices, or node-specific directories, or both.
  4. Adopt naming conventions for logical names used at your site.
    To avoid confusion and name conflicts, develop one naming convention for system-specific logical names and another for clusterwide logical names.
  5. Avoid using the dollar sign ($) in your own site's logical names, because OpenVMS software uses it in its names.
  6. Be aware that clusterwide logical name operations will stall when the clusterwide logical name database is not consistent.
    This can occur during system initialization when the system's clusterwide logical name database is not completely initialized. It can also occur when the cluster server process has not finished updating the clusterwide logical name database, or during resynchronization after nodes enter or leave the cluster. As soon as consistency is reestablished, the processing of clusterwide logical name operations resumes.

5.4.7 Using Clusterwide Logical Names in Applications

The $TRNLNM system service and the $GETSYI system service provide attributes that are specific to clusterwide logical names. This section describes those attributes. It also describes the use of $CRELNT as it pertains to creating a clusterwide table. For more information about using logical names in applications, refer to the HP OpenVMS Programming Concepts Manual.

5.4.7.1 Clusterwide Attributes for $TRNLNM System Service

Two clusterwide attributes are available in the $TRNLNM system service:

  • LNM$V_CLUSTERWIDE
  • LNM$M_INTERLOCKED

LNM$V_CLUSTERWIDE is an output attribute, returned in the item list when you ask for the LNM$_ATTRIBUTES item for a logical name that is clusterwide.

LNM$M_INTERLOCKED is an attr argument bit that can be set to ensure that any clusterwide logical name modifications in progress are completed before the name is translated. LNM$M_INTERLOCKED is not set by default. If your application requires translation using the most recent definition of a clusterwide logical name, use this attribute to ensure that the translation is stalled until all pending modifications have been made.

On a single system, when one process modifies the shareable part of the logical name database, the change is visible immediately to other processes on that node. Moreover, while the modification is in progress, no other process can translate or modify shareable logical names.

In contrast, when one process modifies the clusterwide logical name database, the change is visible immediately on that node, but it takes a short time for the change to be propagated to other nodes. By default, translations of clusterwide logical names are not stalled. Therefore, it is possible for processes on different nodes to translate a logical name and get different equivalence names when modifications are in progress.

The use of LNM$M_INTERLOCKED guarantees that your application will receive the most recent definition of a clusterwide logical name.
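
For illustration, the following C fragment is a minimal sketch of a translation that sets LNM$M_INTERLOCKED in the attr argument of $TRNLNM and also requests LNM$_ATTRIBUTES so that the LNM$V_CLUSTERWIDE bit can be tested. The logical name CLUSTER_APPS and the table LNM$CLUSTER_TABLE are placeholders for whatever name and clusterwide table your application uses.


#include <descrip.h>
#include <lnmdef.h>
#include <starlet.h>
#include <stdio.h>

int main(void)
{
    $DESCRIPTOR(tabnam, "LNM$CLUSTER_TABLE");
    $DESCRIPTOR(lognam, "CLUSTER_APPS");     /* placeholder name */

    char equiv[255];
    unsigned short equiv_len = 0;
    unsigned int name_attr = 0;

    /* Stall this translation until any in-progress clusterwide
       modifications have completed. */
    unsigned int attr = LNM$M_INTERLOCKED;

    struct {
        unsigned short buflen, itmcod;
        void *bufadr, *retlen;
    } itmlst[] = {
        { sizeof equiv,     LNM$_STRING,     equiv,      &equiv_len },
        { sizeof name_attr, LNM$_ATTRIBUTES, &name_attr, 0 },
        { 0, 0, 0, 0 }
    };

    unsigned int status = sys$trnlnm(&attr, &tabnam, &lognam, 0, itmlst);
    if (status & 1) {
        printf("%.*s\n", equiv_len, equiv);
        /* Test the clusterwide output attribute returned in LNM$_ATTRIBUTES. */
        if ((name_attr >> LNM$V_CLUSTERWIDE) & 1)
            printf("The name is clusterwide\n");
    }
    return status;
}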

5.4.7.2 Clusterwide Attribute for $GETSYI System Service

The clusterwide attribute, SYI$_CWLOGICALS, has been added to the $GETSYI system service. When you specify SYI$_CWLOGICALS, $GETSYI returns the value 1 if the clusterwide logical name database has been initialized on the CPU, or the value 0 if it has not been initialized. Because this number is a Boolean value (1 or 0), the buffer length field in the item descriptor should specify 1 (byte). On a nonclustered system, the value of SYI$_CWLOGICALS is always 0.
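
For illustration, the following C fragment is a minimal sketch that calls $GETSYIW with the SYI$_CWLOGICALS item code and a 1-byte buffer, as described above, to determine whether the clusterwide logical name database has been initialized on the local node.


#include <starlet.h>
#include <stdio.h>
#include <syidef.h>

int main(void)
{
    unsigned char cwlogicals = 0;   /* receives the Boolean value 1 or 0 */

    struct {
        unsigned short buflen, itmcod;
        void *bufadr, *retlen;
    } itmlst[] = {
        { 1, SYI$_CWLOGICALS, &cwlogicals, 0 },   /* 1-byte buffer */
        { 0, 0, 0, 0 }
    };

    /* Null csidadr and nodename select the local node; the optional
       IOSB and AST arguments are omitted here. */
    unsigned int status = sys$getsyiw(0, 0, 0, itmlst, 0, 0, 0);
    if (status & 1)
        printf("Clusterwide logical name database initialized: %s\n",
               cwlogicals ? "yes" : "no");
    return status;
}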

5.4.7.3 Creating Clusterwide Tables with the $CRELNT System Service

When creating a clusterwide table, the $CRELNT requester must supply a table name. OpenVMS does not supply a default name for clusterwide tables because the use of default names enables a process without the SYSPRV privilege to create a shareable table.

5.4.8 Defining and Accessing Clusterwide Logical Names

Initializing the clusterwide logical name database on a booting node requires sending a message to another node and having its CLUSTER_SERVER process reply with one or more messages containing a description of the database. The CLUSTER_SERVER process on the booting node requests system services to create the equivalent names and tables. How long this initialization takes varies with conditions such as the size of the clusterwide logical name database, the speed of the cluster interconnect, and the responsiveness of the CLUSTER_SERVER process on the responding node.

Until a booting node's copy of the clusterwide logical name database is consistent with the logical name databases of the rest of the cluster, any attempt on the booting node to create or delete clusterwide names or tables is stalled transparently. Because translations are not stalled by default, any attempt to translate a clusterwide name before the database is consistent may fail or succeed, depending on timing. To stall a translation until the database is consistent, specify the F$TRNLNM CASE argument as INTERLOCKED.

5.4.8.1 Defining Clusterwide Logical Names in SYSTARTUP_VMS.COM

In general, system managers edit the SYLOGICALS.COM command procedure to define site-specific logical names that take effect at system startup. However, HP recommends that, if possible, clusterwide logical names be defined in the SYSTARTUP_VMS.COM command procedure instead, with the exception of those logical names discussed in Section 5.4.8.2. The reason for defining clusterwide logical names in SYSTARTUP_VMS.COM is that SYSTARTUP_VMS.COM runs at a much later stage in the booting process than SYLOGICALS.COM.

OpenVMS startup is single streamed and synchronous except for actions taken by created processes, such as the CLUSTER_SERVER process. Although the CLUSTER_SERVER process is created very early in startup, it is possible that when SYLOGICALS.COM is executed, the booting node's copy of the clusterwide logical name database has not been fully initialized. In such a case, a clusterwide definition in SYLOGICALS.COM would stall startup and increase the time it takes for the system to become operational.

OpenVMS will ensure that the clusterwide database has been initialized before SYSTARTUP_VMS.COM is executed.

5.4.8.2 Defining Certain Logical Names in SYLOGICALS.COM

To be effective, certain logical names, such as LMF$LICENSE, NET$PROXY, and VMS$OBJECTS, must be defined earlier in startup than when SYSTARTUP_VMS.COM is invoked. Most such names are defined in SYLOGICALS.COM, with the exception of VMS$OBJECTS, which is defined in SYSECURITY.COM, and any names defined in SYCONFIG.COM.

Although HP recommends defining clusterwide logical names in SYSTARTUP_VMS.COM, if you want these particular names to be clusterwide, you must define them in SYLOGICALS.COM or SYSECURITY.COM. Note that doing so may increase startup time.

Alternatively, you can take the traditional approach and define these names as systemwide logical names with the same definition on every node.

5.4.8.3 Using Conditional Definitions for Startup Command Procedures

For clusterwide definitions in any startup command procedure that is common to all cluster nodes, HP recommends that you use a conditional definition. For example:


$ IF F$TRNLNM("CLUSTER_APPS") .EQS. "" THEN - 
_$ DEFINE/TABLE=LNM$SYSCLUSTER/EXEC CLUSTER_APPS - 
_$ $1$DKA500:[COMMON_APPS] 

A conditional definition can prevent unpleasant surprises. For example, suppose a system manager redefines a name that is also defined in SYSTARTUP_VMS.COM but does not edit SYSTARTUP_VMS.COM because the new definition is temporary. If a new node joins the cluster, the new node would initially receive the new definition. However, when the new node executes SYSTARTUP_VMS.COM, it will cause all the nodes in the cluster, including itself, to revert to the original value.

If you include a conditional definition in SYLOGICALS.COM or SYSECURITY.COM, specify the F$TRNLNM CASE argument as INTERLOCKED to ensure that clusterwide logical names have been fully initialized before the translation completes. An example of a conditional definition with the argument specified follows:


 $ IF F$TRNLNM("CLUSTER_APPS",,,,"INTERLOCKED") .EQS. "" THEN - 
 _$ DEFINE/TABLE=LNM$SYSCLUSTER/EXEC CLUSTER_APPS - 
 _$ $1$DKA500:[COMMON_APPS] 
 

Note

F$GETSYI("CWLOGICALS") always returns a value of FALSE on a nonclustered system. Procedures that are designed to run in both clustered and nonclustered environments should first determine whether they are in a cluster and, if so, then determine whether clusterwide logical names are initialized.

5.4.9 Displaying Clusterwide Logical Names

The /CLUSTER qualifier was added to the SHOW LOGICAL DCL command in OpenVMS Version 8.2. When the SHOW LOGICAL/CLUSTER command is specified, all clusterwide logical names are displayed, as shown in the following example:


$ SHOW LOGICAL/CLUSTER 
 
(LNM$CLUSTER_TABLE) 
 
(LNM$SYSCLUSTER_TABLE) 
 
  "MSCPMOUNT$_AMALFI_LAST" = "2005-10-10 14:25:03.74" 
  "MSCPMOUNT$_AMALFI_LOGINTIM" = " 8-OCT-2005 01:02:22.17" 
  "MSCPMOUNT$_AMALFI_NEXT" = "2005-10-10 14:40:03.74" 
  "MSCPMOUNT$_AMALFI_PID" = "26200462" 
  . 
  . 
  . 
  "MSCPMOUNT$_ETNA_LAST" = "2005-10-10 14:25:18.78" 
  "MSCPMOUNT$_ETNA_LOGINTIM" = " 8-OCT-2005 07:44:37.89" 
  "MSCPMOUNT$_ETNA_NEXT" = "2005-10-10 14:40:18.79" 
  "MSCPMOUNT$_ETNA_PID" = "26A0044E" 
  . 
  . 
  . 
  "MSCPMOUNT$_MILAN_LAST" = "2005-10-10 14:25:19.64" 
  "MSCPMOUNT$_MILAN_LOGINTIM" = " 8-OCT-2005 07:22:08.05" 
  "MSCPMOUNT$_MILAN_NEXT" = "2005-10-10 14:40:19.64" 
  "MSCPMOUNT$_MILAN_PID" = "26600458" 
  . 
  . 
  . 
  "MSCPMOUNT$_ORVIET_LAST" = "2005-10-10 14:29:25.94" 
  "MSCPMOUNT$_ORVIET_LOGINTIM" = "30-SEP-2005 09:38:27.38" 
  "MSCPMOUNT$_ORVIET_NEXT" = "2005-10-10 14:44:26.61" 
  "MSCPMOUNT$_ORVIET_PID" = "25600139" 
  . 
  . 
  . 
  "MSCPMOUNT$_TURIN_LAST" = "2005-10-10 14:39:59.59" 
  "MSCPMOUNT$_TURIN_LOGINTIM" = "10-OCT-2005 09:22:48.46" 
  "MSCPMOUNT$_TURIN_NEXT" = "2005-10-10 14:54:59.59"            
  "MSCPMOUNT$_TURIN_PID" = "2760012C" 
  "PREPOPULATE_NEXT_STREAM$IGNORE_BUILD_MASTER_944" = "1" 
                                                                    
(CLU$ICC_ORBS_AMALFI) 
 
  "ICC$ORB_ICC$PID_26200450_U" = "T" 
      = "M\.v....k...............æ...æ...þ...þ.....AMALFI::ICC$PID_26200450_U....." 
  "ICC$ORB_REG$SERVER_E" = "T" 
      = "p.O<....e...............æ...æ...þ...þ.....AMALFI::REG$SERVER_E044........" 
  "ICC$ORB_REG$SERVER_K" = "T" 
      = "p.O<....e...............æ...æ...þ...þ.....AMALFI::REG$SERVER_K044........" 
  "ICC$ORB_REG$SERVER_U" = "T" 
      = "p.O<....e...............æ...æ...þ...þ.....AMALFI::REG$SERVER_U044........" 
 
(CLU$ICC_ORBS_ETNA) 
 
(CLU$ICC_ORBS_MILAN) 
 
(CLU$ICC_ORBS_ORVIET) 
 
  "ICC$ORB_ICC$PID_26000450_U" = "T" 
      = "VQ.p....k...............æ...æ...þ...þ.....ETNA::ICC$PID_26000450_U......." 
 
(CLU$ICC_ORBS_TURIN) 
 
. 
. 
. 
(ICC$REGISTRY_TABLE) 

5.5 Coordinating Startup Command Procedures

Immediately after a computer boots, it runs the site-independent command procedure SYS$SYSTEM:STARTUP.COM to start up the system and control the sequence of startup events. The STARTUP.COM procedure calls a number of other startup command procedures that perform cluster-specific and node-specific tasks.

The following sections describe how, by setting up appropriate cluster-specific startup command procedures and other system files, you can prepare the OpenVMS Cluster operating environment on the first installed computer before adding other computers to the cluster.

Reference: See also the HP OpenVMS System Manager's Manual for more information about startup command procedures.

5.5.1 OpenVMS Startup Procedures

Several startup command procedures are distributed as part of the OpenVMS operating system. The SYS$SYSTEM:STARTUP.COM command procedure executes immediately after OpenVMS is booted and invokes the site-specific startup command procedures described in the following table.

Procedure Name: SYS$MANAGER:SYPAGSWPFILES.COM
Invoked by: SYS$SYSTEM:STARTUP.COM
Function: A file to which you add commands to install page and swap files (other than the primary page and swap files that are installed automatically).

Procedure Name: SYS$MANAGER:SYCONFIG.COM
Invoked by: SYS$SYSTEM:STARTUP.COM
Function: Connects special devices and loads device I/O drivers.

Procedure Name: SYS$MANAGER:SYSECURITY.COM
Invoked by: SYS$SYSTEM:STARTUP.COM
Function: Defines the location of the security audit and archive files before it starts the security audit server.

Procedure Name: SYS$MANAGER:SYLOGICALS.COM
Invoked by: SYS$SYSTEM:STARTUP.COM
Function: Creates systemwide logical names, and defines system components as executive-mode logical names. (Clusterwide logical names should be defined in SYSTARTUP_VMS.COM.) Cluster common disks can be mounted at the end of this procedure.

Procedure Name: SYS$MANAGER:SYSTARTUP_VMS.COM
Invoked by: SYS$SYSTEM:STARTUP.COM
Function: Performs many of the following startup and login functions:
  • Mounts all volumes except the system disk.
  • Sets device characteristics.
  • Defines clusterwide logical names.
  • Initializes and starts batch and print queues.
  • Installs known images.
  • Starts layered products.
  • Starts the DECnet software.
  • Analyzes most recent system failure.
  • Purges old operator log files.
  • Starts the LAT network (if used).
  • Defines the maximum number of interactive users.
  • Announces that the system is up and running.
  • Allows users to log in.

The directory SYS$COMMON:[SYSMGR] contains a template file for each command procedure that you can edit. Use the command procedure templates (in SYS$COMMON:[SYSMGR]*.TEMPLATE) as examples for customization of your system's startup and login characteristics.

5.5.2 Building Startup Procedures

The first step in preparing an OpenVMS Cluster shared environment is to build a SYSTARTUP_VMS command procedure. Each computer executes the procedure at startup time to define the operating environment.

Prepare the SYSTARTUP_VMS.COM procedure as follows:

Step 1. In each computer's SYS$SPECIFIC:[SYSMGR] directory, edit the SYSTARTUP_VMS.TEMPLATE file to set up a SYSTARTUP_VMS.COM procedure that:
  • Performs computer-specific startup functions such as the following:
    • Setting up dual-ported and local disks
    • Loading device drivers
    • Setting up local terminals and terminal server access
  • Invokes the common startup procedure (described next).
Step 2. Build a common command procedure that includes startup commands that you want to be common to all computers. The common procedure might contain commands that:
  • Install images
  • Define logical names
  • Set up queues
  • Set up and mount physically accessible mass storage devices
  • Perform any other common startup functions

Note: You might choose to build these commands into individual command procedures that are invoked from the common procedure. For example, the MSCPMOUNT.COM file in the SYS$EXAMPLES directory is a sample common command procedure that contains commands typically used to mount cluster disks. The example includes comments explaining each phase of the procedure.

Step 3. Place the common procedure in the SYS$COMMON:[SYSMGR] directory on a common system disk or other cluster-accessible disk.

Important: The common procedure is usually located in the SYS$COMMON:[SYSMGR] directory on a common system disk but can reside on any disk, provided that the disk is cluster accessible and is mounted when the procedure is invoked. If you create a copy of the common procedure for each computer, you must remember to update each copy whenever you make changes.

