OpenVMS Cluster Systems
7.12 Using a Common Command Procedure
Once you have created queues, you must start them to begin processing
batch and print jobs. In addition, you must make sure the queues are
started each time the system reboots, by enabling autostart for
autostart queues or by entering START/QUEUE commands for nonautostart
queues. To do so, create a command procedure containing the necessary
commands.
7.12.1 Command Procedure
You can create a common command procedure named, for example,
QSTARTUP.COM, and store it on a shared disk. With this method, each
node can share the same copy of the common QSTARTUP.COM procedure. Each
node invokes the common QSTARTUP.COM procedure from the common version
of SYSTARTUP. You can also include the commands to start queues in the
common SYSTARTUP file instead of in a separate QSTARTUP.COM file.
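For example, a single line in the common SYSTARTUP file can invoke the shared procedure. This is a minimal sketch; it assumes QSTARTUP.COM resides in the cluster-common system manager directory, so substitute the actual location of your copy:
$ @SYS$COMMON:[SYSMGR]QSTARTUP.COM    ! invoke the shared queue startup procedure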
7.12.2 Examples
Example 7-1 shows commands used to create OpenVMS Cluster queues.
Example 7-1 Sample Commands for Creating OpenVMS Cluster Queues
$
(1)
$ DEFINE/FORM LN_FORM 10 /WIDTH=80 /STOCK=DEFAULT /TRUNCATE
$ DEFINE/CHARACTERISTIC 2ND_FLOOR 2
.
.
.
(2)
$ INITIALIZE/QUEUE/AUTOSTART_ON=(JUPITR::LPA0:)/START JUPITR_PRINT
$ INITIALIZE/QUEUE/AUTOSTART_ON=(SATURN::LPA0:)/START SATURN_PRINT
$ INITIALIZE/QUEUE/AUTOSTART_ON=(URANUS::LPA0:)/START URANUS_PRINT
.
.
.
(3)
$ INITIALIZE/QUEUE/BATCH/START/ON=JUPITR:: JUPITR_BATCH
$ INITIALIZE/QUEUE/BATCH/START/ON=SATURN:: SATURN_BATCH
$ INITIALIZE/QUEUE/BATCH/START/ON=URANUS:: URANUS_BATCH
.
.
.
(4)
$ INITIALIZE/QUEUE/START -
_$ /AUTOSTART_ON=(JUPITR::LTA1:,SATURN::LTA1:,URANUS::LTA1:) -
_$ /PROCESSOR=LATSYM /FORM_MOUNTED=LN_FORM -
_$ /RETAIN=ERROR /DEFAULT=(NOBURST,FLAG=ONE,NOTRAILER) -
_$ /RECORD_BLOCKING LN03$PRINT
$
$ INITIALIZE/QUEUE/START -
_$ /AUTOSTART_ON=(JUPITR::LTA2:,SATURN::LTA2:,URANUS::LTA2:) -
_$ /PROCESSOR=LATSYM /RETAIN=ERROR -
_$ /DEFAULT=(NOBURST,FLAG=ONE,NOTRAILER) /RECORD_BLOCKING -
_$ /CHARACTERISTIC=2ND_FLOOR LA210$PRINT
$
(5)
$ ENABLE AUTOSTART/QUEUES/ON=SATURN
$ ENABLE AUTOSTART/QUEUES/ON=JUPITR
$ ENABLE AUTOSTART/QUEUES/ON=URANUS
(6)
$ INITIALIZE/QUEUE/START SYS$PRINT -
_$ /GENERIC=(JUPITR_PRINT,SATURN_PRINT,URANUS_PRINT)
$
(7)
$ INITIALIZE/QUEUE/BATCH/START SYS$BATCH -
_$ /GENERIC=(JUPITR_BATCH,SATURN_BATCH,URANUS_BATCH)
$
Following are descriptions of each command or group of commands in
Example 7-1.
(1) Define all printer forms and characteristics.

(2) Initialize local print queues. In the example, these queues are autostart queues and are started automatically when the node executes the ENABLE AUTOSTART/QUEUES command. Although the /START qualifier is specified to activate the autostart queues, they do not begin processing jobs until autostart is enabled.
To enable autostart each time the system reboots, add the ENABLE AUTOSTART/QUEUES command to your queue startup command procedure, as shown in Example 7-2.

(3) Initialize and start local batch queues on all nodes, including satellite nodes. In this example, the local batch queues are not autostart queues.

(4) Initialize queues for remote LAT printers. In the example, these queues are autostart queues and are set up to run on one of three nodes. The queues are started on the first of those three nodes to execute the ENABLE AUTOSTART command.
You must establish the logical devices LTA1 and LTA2 in the LAT startup command procedure LAT$SYSTARTUP.COM on each node on which the autostart queue can run. For more information, see the description of editing LAT$SYSTARTUP.COM in the OpenVMS System Manager's Manual.
Although the /START qualifier is specified to activate these autostart queues, they will not begin processing jobs until autostart is enabled.

(5) Enable autostart to start the autostart queues automatically. In the example, autostart is enabled on node SATURN first, so the queue manager starts the autostart queues that are set up to run on one of several nodes.

(6) Initialize and start the generic output queue SYS$PRINT. This is a nonautostart queue (generic queues cannot be autostart queues). However, generic queues are not stopped automatically when a system is shut down, so you do not need to restart the queue each time a node reboots.

(7) Initialize and start the generic batch queue SYS$BATCH. Because this is a generic queue, it is not stopped when the node shuts down. Therefore, you do not need to restart the queue each time a node reboots.
7.12.3 Example
Example 7-2 illustrates the use of a common QSTARTUP command
procedure on a shared disk.
Example 7-2 Common Procedure to Start OpenVMS Cluster Queues
$!
$! QSTARTUP.COM -- Common procedure to set up cluster queues
$!
$!
(1)
$ NODE = F$GETSYI("NODENAME")
$!
$! Determine the node-specific subroutine
$!
$ IF (NODE .NES. "JUPITR") .AND. (NODE .NES. "SATURN") .AND. (NODE .NES. "URANUS")
$ THEN
$ GOSUB SATELLITE_STARTUP
$ ELSE
(2)
$!
$! Configure remote LAT devices.
$!
$ SET TERMINAL LTA1: /PERM /DEVICE=LN03 /WIDTH=255 /PAGE=60 -
/LOWERCASE /NOBROAD
$ SET TERMINAL LTA2: /PERM /DEVICE=LA210 /WIDTH=255 /PAGE=66 -
/NOBROAD
$ SET DEVICE LTA1: /SPOOLED=(LN03$PRINT,SYS$SYSDEVICE:)
$ SET DEVICE LTA2: /SPOOLED=(LA210$PRINT,SYS$SYSDEVICE:)
(3)
$ START/QUEUE/BATCH 'NODE'_BATCH
$ GOSUB 'NODE'_STARTUP
$ ENDIF
$ GOTO ENDING
$!
$! Node-specific subroutines start here
$!
(4)
$ SATELLITE_STARTUP:
$!
$! Start a batch queue for satellites.
$!
$ START/QUEUE/BATCH 'NODE'_BATCH
$ RETURN
$!
(5)
$JUPITR_STARTUP:
$!
$! Node-specific startup for JUPITR::
$! Setup local devices and start nonautostart queues here
$!
$ SET PRINTER/PAGE=66 LPA0:
$ RETURN
$!
$SATURN_STARTUP:
$!
$! Node-specific startup for SATURN::
$! Setup local devices and start nonautostart queues here
$!
.
.
.
$ RETURN
$!
$URANUS_STARTUP:
$!
$! Node-specific startup for URANUS::
$! Setup local devices and start nonautostart queues here
$!
.
.
.
$ RETURN
$!
$ENDING:
(6)
$! Enable autostart to start all autostart queues
$!
$ ENABLE AUTOSTART/QUEUES
$ EXIT
Following are descriptions of each phase of the common QSTARTUP.COM
command procedure in Example 7-2.
(1) Determine the name of the node executing the procedure.

(2) On all large nodes, set up remote devices connected by the LAT. The queues for these devices are autostart queues and are started automatically when the ENABLE AUTOSTART/QUEUES command is executed at the end of this procedure.
In the example, these autostart queues were set up to run on one of three nodes. The queues start when the first of those nodes executes the ENABLE AUTOSTART/QUEUES command. The queue remains running as long as one of those nodes is running and has autostart enabled.

(3) On large nodes, start the local batch queue. In the example, the local batch queues are nonautostart queues and must be started explicitly with START/QUEUE commands.

(4) On satellite nodes, start the local batch queue.

(5) Each node executes its own subroutine. On node JUPITR, set up the line printer device LPA0:. The queue for this device is an autostart queue and is started automatically when the ENABLE AUTOSTART/QUEUES command is executed.

(6) Enable autostart to start all autostart queues.
7.13 Disabling Autostart During Shutdown
By default, the shutdown procedure disables autostart at the beginning
of the shutdown sequence. Autostart is disabled to allow autostart
queues with failover lists to fail over to another node. Disabling autostart also
prevents any autostart queue running on another node in the cluster from failing
over to the node being shut down.
7.13.1 Options
You can change the time at which autostart is disabled in the shutdown
sequence in one of two ways:
Option 1: Define the logical name SHUTDOWN$DISABLE_AUTOSTART as follows:
$ DEFINE/SYSTEM/EXECUTIVE SHUTDOWN$DISABLE_AUTOSTART number-of-minutes
Set the value of number-of-minutes to the number of minutes before shutdown when autostart is to be disabled. You can add this logical name definition to SYLOGICALS.COM. The value of number-of-minutes is the default value for the node. If this number is greater than the number of minutes specified for the entire shutdown sequence, autostart is disabled at the beginning of the sequence.

Option 2: Specify the DISABLE_AUTOSTART number-of-minutes option during the shutdown procedure. (The value you specify for number-of-minutes overrides the value specified for the SHUTDOWN$DISABLE_AUTOSTART logical name.)
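For example, to have autostart disabled 10 minutes before the end of the shutdown sequence on a node, you could add a line such as the following to SYLOGICALS.COM (the value 10 is only an illustration):
$ DEFINE/SYSTEM/EXECUTIVE SHUTDOWN$DISABLE_AUTOSTART 10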
Reference: See the OpenVMS System Manager's Manual for more information
about changing the time at which autostart is disabled during the
shutdown sequence.
Chapter 8 Configuring an OpenVMS Cluster System
This chapter provides an overview of the cluster configuration command
procedures and describes the preconfiguration tasks required before
running either command procedure. Then it describes each major function
of the command procedures and the postconfiguration tasks, including
running AUTOGEN.COM.
8.1 Overview of the Cluster Configuration Procedures
Two similar command procedures are provided for configuring and
reconfiguring an OpenVMS Cluster system: CLUSTER_CONFIG_LAN.COM and
CLUSTER_CONFIG.COM. The choice depends on whether you use the LANCP
utility or DECnet for satellite booting in your cluster.
CLUSTER_CONFIG_LAN.COM provides satellite booting services with the
LANCP utility; CLUSTER_CONFIG.COM provides satellite booting services
with DECnet. See Section 4.5 for the factors to consider when
choosing a satellite booting service.
These configuration procedures automate most of the tasks required to
configure an OpenVMS Cluster system.
When you invoke CLUSTER_CONFIG_LAN.COM or CLUSTER_CONFIG.COM, the
following configuration options are displayed:
- Add a computer to the cluster
- Remove a computer from the cluster
- Change a computer's characteristics
- Create a duplicate system disk
- Make a directory structure for a new root on a system disk
- Delete a root from a system disk
By selecting the appropriate option, you can configure the cluster
easily and reliably without invoking any OpenVMS utilities directly.
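For example, assuming the procedures are in their usual location, SYS$MANAGER, a configuration session might begin as follows (a sketch only; the menu text and numbering vary by OpenVMS version):
$ @SYS$MANAGER:CLUSTER_CONFIG_LAN.COM    ! or CLUSTER_CONFIG.COM if DECnet provides satellite booting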
Table 8-1 summarizes the functions that the configuration
procedures perform for each configuration option.
The phrase cluster configuration command procedure,
when used in this chapter, refers to both CLUSTER_CONFIG_LAN.COM and
CLUSTER_CONFIG.COM. The questions of the two configuration procedures
are identical except where they pertain to LANCP and DECnet.
Note: For help on any question in these command
procedures, type a question mark (?) at the question.
Table 8-1 Summary of Cluster Configuration Functions
Option |
Functions Performed |
ADD
|
Enables a node as a cluster member.
|
REMOVE
|
Disables a node as a cluster member:
- Deletes another computer's root directory and its contents from the
local computer's system disk. If the computer being removed is a
satellite, the cluster configuration command procedure updates
SYS$MANAGER:NETNODE_UPDATE.COM on the local computer.
- Updates the permanent and volatile remote node network databases on
the local computer.
- Removes the quorum disk.
|
CHANGE
|
Displays the CHANGE menu and prompts for appropriate information to:
- Enable or disable the local computer as a disk server
- Enable or disable the local computer as a boot server
- Enable or disable the Ethernet or FDDI LAN for cluster
communications on the local computer
- Enable or disable a quorum disk on the local computer
- Change a satellite's Ethernet or FDDI hardware address
- Enable or disable the local computer as a tape server
- Change the local computer's ALLOCLASS or TAPE_ALLOCLASS value
- Change the local computer's shared SCSI port allocation class value
- Enable or disable MEMORY CHANNEL for node-to-node cluster
communications on the local computer
|
CREATE
|
Duplicates the local computer's system disk and removes all system
roots from the new disk.
|
MAKE
|
Creates a directory structure for a new root on a system disk.
|
DELETE
|
Deletes a root from a system disk.
|
8.1.1 Before Configuring the System
Before invoking either the CLUSTER_CONFIG_LAN.COM or the
CLUSTER_CONFIG.COM procedure to configure an OpenVMS Cluster system,
perform the tasks described in Table 8-2.
Table 8-2 Preconfiguration Tasks
Task |
Procedure |
Determine whether the computer uses DECdtm.
|
When you add a computer to or remove a computer from a cluster that
uses DECdtm services, you must perform several tasks to ensure the integrity of
your data.
Reference: See the chapter about DECdtm services in
the OpenVMS System Manager's Manual for step-by-step instructions on setting up DECdtm in
an OpenVMS Cluster system.
If you are not sure whether your cluster uses DECdtm services,
enter this command sequence:
$ SET PROCESS /PRIVILEGES=SYSPRV
$ RUN SYS$SYSTEM:LMCP
LMCP> SHOW LOG
If your cluster does not use DECdtm services, the SHOW LOG command
will display a "file not found" error message. If your
cluster uses DECdtm services, it displays a list of the files that
DECdtm uses to store information about transactions.
|
Ensure that the network software providing the satellite booting service is
up and running and that all computers are connected to the LAN.
|
For nodes that will use the LANCP utility for satellite booting, run
the LANCP utility and enter the LANCP command LIST DEVICE/MOPDLL to
display a list of LAN devices on the system:
$ RUN SYS$SYSTEM:LANCP
LANCP> LIST DEVICE/MOPDLL
For nodes running DECnet for OpenVMS, enter the DCL command SHOW
NETWORK to determine whether the network is up and running:
$ SHOW NETWORK
VAX/VMS Network status for local node 63.452 VIVID on 5-NOV-1994
This is a nonrouting node, and does not have any network
information.
The designated router for VIVID is node 63.1021 SATURN.
This example shows that the node VIVID is running DECnet for
OpenVMS. If DECnet has not been started, the message
"SHOW-I-NONET, Network Unavailable" is displayed.
For nodes running DECnet--Plus, refer to DECnet for OpenVMS Network Management Utilities for
information about determining whether the DECnet--Plus network is up
and running.
|
Select MOP and disk servers.
|
Every OpenVMS Cluster configured with satellite nodes must include at
least one Maintenance Operations Protocol (MOP) and disk server. When
possible, select multiple computers as MOP and disk servers. Multiple
servers give better availability, and they distribute the work load
across more LAN adapters.
Follow these guidelines when selecting MOP and disk servers:
- Ensure that MOP servers have direct access to the system disk.
- Ensure that disk servers have direct access to the storage that
they are serving.
- Choose the most powerful computers in the cluster. Low-powered
computers can become overloaded when serving many busy satellites or
when many satellites boot simultaneously. Note, however, that two or
more moderately powered servers may provide better performance than a
single high-powered server.
- If you have several computers of roughly comparable power, it is
reasonable to use them all as boot servers. This arrangement gives
optimal load balancing. In addition, if one computer fails or is shut
down, others remain available to serve satellites.
- After compute power, the most important factor in selecting a
server is the speed of its LAN adapter. Servers should be equipped with
the highest-bandwidth LAN adapters in the cluster.
|
Make sure you are logged in to a privileged account.
|
Log in to a privileged account.
Rules: If you are adding a satellite, you must be
logged into the system manager's account on a boot server. Note that
the process privileges SYSPRV, OPER, CMKRNL, BYPASS, and NETMBX are
required, because the procedure performs privileged system operations.
|
Coordinate cluster common files.
|
If your configuration has two or more system disks, follow the
instructions in Chapter 5 to coordinate the cluster common files.
|
Optionally, disable broadcast messages to your terminal.
|
While adding and removing computers, many such messages are generated.
To disable the messages, you can enter the DCL command
REPLY/DISABLE=(NETWORK, CLUSTER). See also Section 10.6 for more
information about controlling OPCOM messages.
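For example, you might bracket the configuration session as follows; the REPLY/ENABLE command is a suggested companion step for restoring the messages afterward, not part of the configuration procedure:
$ REPLY/DISABLE=(NETWORK,CLUSTER)    ! suppress network and cluster broadcasts
$ ! ... run the cluster configuration procedure ...
$ REPLY/ENABLE=(NETWORK,CLUSTER)     ! restore the broadcasts when finished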
|
Predetermine answers to the questions asked by the cluster
configuration procedure.
|
Table 8-3 describes the data requested by the cluster configuration
command procedures.
|
8.1.2 Data Requested by the Cluster Configuration Procedures
The following table describes the questions asked by the cluster
configuration command procedures and describes how you might answer
them. The table is supplied here so that you can determine answers to
the questions before you invoke the procedure.
Because many of the questions are configuration specific, Table 8-3
lists the questions according to configuration type, and not in the
order they are asked.
Table 8-3 Data Requested by CLUSTER_CONFIG_LAN.COM and CLUSTER_CONFIG.COM
Information Required |
How to Specify or Obtain |
For all configurations |
Device name of cluster system disk on which root directories will be
created
|
Press Return to accept the default device name which is the translation
of the SYS$SYSDEVICE: logical name, or specify a logical name that
points to the common system disk.
|
Computer's root directory name on cluster system disk
|
Press Return to accept the procedure-supplied default, or specify a name in the form SYSx:
- For computers with direct access to the system disk, x is a hexadecimal digit in the range of 1 through 9 or A through D (for example, SYS1 or SYSA).
- For satellites, x must be in the range of 10 through FFFF.
|
Workstation windowing system
|
System manager specifies. Workstation software must be installed before
workstation satellites are added. If it is not, the procedure indicates
that fact.
|
Location and sizes of page and swap files
|
This information is requested only when you add a computer to the
cluster. Press Return to accept the default size and location. (The
default sizes displayed in brackets by the procedure are minimum
values. The default location is the device name of the cluster system
disk.)
If your configuration includes satellite nodes, you may realize a
performance improvement by locating satellite page and swap files on a
satellite's local disk, if such a disk is available. The potential for
performance improvement depends on the configuration of your OpenVMS
Cluster system disk and network.
To set up page and swap files on a satellite's local disk, the cluster configuration procedure creates a command procedure called SATELLITE_PAGE.COM in the satellite's [SYSn.SYSEXE] directory on the boot server's system disk. The SATELLITE_PAGE.COM procedure performs the following functions:
Note: For page and swap disks that are shadowed, you must edit the MOUNT and INIT commands in SATELLITE_PAGE.COM to the appropriate syntax for mounting any specialized "local" disks (that is, host-based shadowing disks (DSxxx), host-based RAID disks (DPxxxx), or DECram disks (MDAxxxx)) on the newly added node. CLUSTER_CONFIG(_LAN).COM does not create the MOUNT and INIT commands required for SHADOW, RAID, or DECram disks.
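As a sketch of the kind of edit involved, the MOUNT command for a hypothetical shadow set DSA1: with members $1$DKA100: and $1$DKB100: and volume label PAGESWAP might look like this; adapt the device names, label, and qualifiers to your configuration:
$ MOUNT/SYSTEM/NOASSIST DSA1: /SHADOW=($1$DKA100:,$1$DKB100:) PAGESWAP    ! hypothetical shadow set for page and swap files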
Note: To relocate the satellite's page and swap files
(for example, from the satellite's local disk to the boot server's
system disk, or the reverse) or to change file sizes:
- Create new PAGE and SWAP files on a shared device, as shown:
$ MCR SYSGEN CREATE device:[dir]PAGEFILE.SYS/SIZE=block-count
Note: If page and swap files will be created for a
shadow set, you must edit SATELLITE_PAGE accordingly.
- Rename the SYS$SPECIFIC:[SYSEXE]PAGEFILE.SYS and SWAPFILE.SYS files
to PAGEFILE.TMP and SWAPFILE.TMP.
- Reboot, and then delete the .TMP files.
- Modify the SYS$MANAGER:SYPAGSWPFILES.COM procedure to load the files (see the sketch after this list).
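As an illustration of that last step, SYPAGSWPFILES.COM might install the relocated files with SYSGEN commands such as the following; the device and directory DUA3:[PAGESWAP] are hypothetical placeholders, and the disk must already be mounted:
$ MCR SYSGEN INSTALL DUA3:[PAGESWAP]PAGEFILE.SYS /PAGEFILE    ! install the relocated page file
$ MCR SYSGEN INSTALL DUA3:[PAGESWAP]SWAPFILE.SYS /SWAPFILE    ! install the relocated swap file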
|
Value for local computer's allocation class (ALLOCLASS or
TAPE_ALLOCLASS) parameter.
|
The ALLOCLASS parameter can be used for a node allocation class or, on
Alpha computers, a port allocation class. Refer to Section 6.2.1 for
complete information about specifying allocation classes.
|
Physical device name of quorum disk
|
System manager specifies.
|
For systems running DECnet for OpenVMS |
Computer's DECnet node address for Phase IV
|
For the DECnet node address, you obtain this information as follows:
- If you are adding a computer, the network manager supplies the
address.
- If you are removing a computer, use the SHOW NETWORK command (as
shown in Table 8-2).
|
Computer's DECnet node name
|
Network manager supplies. The name must be from 1 to 6 alphanumeric
characters and
cannot include dollar signs ($) or underscores (_).
|
For systems running DECnet--Plus |
Computer's DECnet node address for Phase IV (if you need Phase IV
compatibility)
|
For the DECnet node address, you obtain this information as follows:
- If you are adding a computer, the network manager supplies the
address.
- If you are removing a computer, use the SHOW NETWORK command (as
shown in Table 8-2).
|
Node's DECnet full name
|
Determine the full name with the help of your network manager. Enter a
string consisting of:
- The namespace name, ending with a colon (:). This is optional.
- The root directory, designated by a period (.).
- Zero or more hierarchical directories, designated by a character
string followed by a period (.).
- The simple name, a character string that, combined with the
directory names, uniquely identifies the node. For example:
.SALES.NETWORKS.MYNODE
MEGA:.INDIANA.JONES
COLUMBUS:.FLATWORLD
|
SCS node name for this node
|
Enter the OpenVMS Cluster node name, which is a string of 6 or fewer
alphanumeric characters.
|
DECnet synonym
|
Press Return to define a DECnet synonym, which is a short name for the
node's full name. Otherwise, enter N.
|
Synonym name for this node
|
Enter a string of 6 or fewer alphanumeric characters. By default, it is
the first 6 characters of the last simple name in the full name. For
example:
+Full name: BIGBANG:.GALAXY.NOVA.BLACKHOLE
Synonym: BLACKH
Note: The node synonym does not need to be the same as
the OpenVMS Cluster node name.
|
MOP service client name for this node
|
Enter the name for the node's MOP service client when the node is
configured as a boot server. By default, it is the OpenVMS Cluster node
name (that is, the SCS node name). This name does not need to be
the same as the OpenVMS Cluster node name.
|
For systems running TCP/IP or the LANCP Utility for satellite booting, or both |
Computer's SCS node name (SCSNODE) and SCS system ID (SCSSYSTEMID)
|
These prompts are described in Section 4.2.3. If a system is running
TCP/IP, the procedure does not ask for a TCP/IP host name because a
cluster node name (SCSNODE) does not have to match a TCP/IP host name.
The TCP/IP host name might be longer than six characters, whereas the
SCSNODE name must be no more than six characters. Note that if the
system is running both DECnet and IP, then the procedure uses the
DECnet defaults.
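If you want to confirm the values currently in effect on a running system before you answer, one way (a suggestion, not a step required by the procedure) is to display the parameters with SYSGEN:
$ MCR SYSGEN SHOW SCSNODE
$ MCR SYSGEN SHOW SCSSYSTEMID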
|
For LAN configurations |
Cluster group number and password
|
This information is requested only when the CHANGE option is chosen.
See Section 2.5 for information about assigning cluster group numbers
and passwords.
|
Satellite's LAN hardware address
|
Address has the form xx-xx-xx-xx-xx-xx. You must include the hyphens when you specify a hardware address. Proceed as follows:
- ++On Alpha systems, enter the following command at the satellite's
console:
>>> SHOW NETWORK
Note that you can also use the SHOW CONFIG command.
- +On MicroVAX II and VAXstation II satellite nodes. When the DECnet
for OpenVMS network is running on a boot server, enter the following
commands at the satellite's console:
>>> B/100 XQA0
Bootfile: READ_ADDR
- +On MicroVAX 2000 and VAXstation 2000 satellite nodes. When the
DECnet for OpenVMS network is running on a boot server, enter the
following commands at successive console-mode prompts:
>>> T 53
2 ?>>> 3
>>> B/100 ESA0
Bootfile: READ_ADDR
If the second prompt appears as 3 ?>>>, press Return.
- +On MicroVAX 3xxx and 4xxx series satellite nodes, enter the following command at the satellite's console:
>>> SHOW ETHERNET
|
+DECnet--Plus full-name functionality is VAX specific.
++Alpha specific.
|