Note
Assuming that the active interface is EIA, configure the satellite
with EIA; if it does not boot with EIA, then try EIB. If the wrong
interface name is given, the satellite node fails during booting and
displays error messages.
Field       Description

(1)   Enter the LAN adapter's hardware address.

(2)   Enter the TCP/IP address.

(3)   Enter the TCP/IP gateway.

(4)   Enter the TCP/IP network mask address.

(5)   Enable IP for cluster communication.

(6)   The UDP port number to be used for cluster communication. The UDP
      port number must be the same on all members of the cluster. Also,
      ensure that no other cluster in your environment uses the same
      UDP port number and that the port number is not used by any other
      application.

(7)   Enter the IP multicast address for the cluster, if IP
      multicasting is enabled. By default, the IP multicast address is
      selected from the administratively scoped IP multicast address
      range of 239.242.x.y. The last two octets, x and y, are generated
      from the cluster group number. In the above example, the cluster
      group number is 1985, and the octets are calculated with integer
      arithmetic as follows:
      x = 1985/256 = 7
      y = 1985 - (256 * x) = 193
      giving the default address 239.242.7.193. The system
      administrator can override the default multicast address with a
      unique address for their environment. The multicast address is
      modified based on the cluster group number, or it can be added to
      the PE$IP_CONFIG.DAT file. (A DCL sketch of this calculation
      follows the table.)

(8)   TTL is the time-to-live for IP multicast packets. It specifies
      the number of hops allowed for IP multicast packets.

(9)   Enter "yes" to enter the IP unicast addresses of remote nodes of
      the cluster that are not reachable using the IP multicast
      address.

(10)  In the TCP/IP configuration, select option 0 to set the target
      node to JASMIN, the satellite node that will be added to the
      cluster.

(11)  Proceed with the configuration steps to configure node JASMIN.

(12)  Enter the system device for JASMIN, which is DSA2.

(13)  Enter JASMIN's root, which is SYS14.

(14)  Enter the controller on which IP will be configured for cluster
      communication. The controller information is obtained from the
      console of JASMIN, as explained at the beginning of the
      configuration.

(15)  Select an option to add a primary address for IE0 (the IP
      interface name of controller EIA).

(16)  Enable the use of IE0 for Cluster over IP and proceed with the
      rest of the configuration.
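The octet derivation in item (7) can be reproduced in DCL, whose
integer arithmetic performs the truncating division directly. This is a
minimal sketch, not part of the configuration procedure; the group
number 1985 is the example value from the text:
$ group = 1985
$ x = group / 256              ! DCL division truncates: 7
$ y = group - (256 * x)        ! remainder: 193
$ WRITE SYS$OUTPUT "Default multicast address: 239.242.''x'.''y'"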
Step 3. Executing the CLUSTER_CONFIG_LAN.COM Procedure
Continue to run CLUSTER_CONFIG_LAN.COM to complete the cluster
configuration procedure.
Adjusting protection on DSA2:[SYS14.][SYSEXE]PE$IP_CONFIG.DAT;1
Will JASMIN be a disk server [N]? Y
Enter a value for JASMIN's ALLOCLASS parameter [0]: 15
Updating BOOTP database with satellite information for JASMIN..
Size of pagefile for JASMIN [RETURN for AUTOGEN sizing]?
A temporary pagefile will be created until resizing by AUTOGEN. The
default size below is arbitrary and may or may not be appropriate.
Size of temporary pagefile [10000]? [Return]
Size of swap file for JASMIN [RETURN for AUTOGEN sizing]? [Return]
A temporary swap file will be created until resizing by AUTOGEN. The
default size below is arbitrary and may or may not be appropriate.
Size of temporary swap file [8000]? [Return]
NOTE: IA64 satellite node JASMIN requires DOSD if capturing the
system state in a dumpfile is desired after a system crash.
Will a local disk on JASMIN be used for paging and swapping (Y/N)? N
If you specify a device other than DISK$TULIPSYS: for JASMIN's
page and swap files, this procedure will create PAGEFILE_JASMIN.SYS
and SWAPFILE_JASMIN.SYS in the <SYSEXE> directory on the device you
specify.
What is the device name for the page and swap files [DISK$TULIPSYS:]? [Return]
%SYSGEN-I-CREATED, DSA2:<SYS14.SYSEXE>PAGEFILE.SYS;1 created
%SYSGEN-I-CREATED, DSA2:<SYS14.SYSEXE>SWAPFILE.SYS;1 created
The configuration procedure has completed successfully.
The node JASMIN is configured to join the cluster. After the first boot
of JASMIN, AUTOGEN.COM will run automatically.
Step 4. Updating the PE$IP_CONFIG.DAT File
To ensure that the nodes join the cluster, PE$IP_CONFIG.DAT must be
consistent across all members of the cluster. Copy the
SYS$SYSTEM:PE$IP_CONFIG.DAT file that is created in node JASMIN's root
to the other nodes, ORCHID and TULIP, as in the sketch below.
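A hedged example of the copy, assuming DSA2 (which holds JASMIN's root,
per the transcript above) is mounted clusterwide and that each node
keeps a node-specific copy in its SYS$SPECIFIC root:
$ ! Run on ORCHID, then again on TULIP
$ COPY DSA2:[SYS14.SYSEXE]PE$IP_CONFIG.DAT -
_$ SYS$SPECIFIC:[SYSEXE]PE$IP_CONFIG.DAT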
Step 5. Refreshing the Unicast list
On both ORCHID and TULIP, update the new unicast list in the
PE$IP_CONFIG.DAT file by issuing the refresh command for PEDRIVER, as
shown below. You can also use SYSMAN to run the command cluster-wide.
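A minimal sketch, assuming the SCACP utility's RELOAD command (which
directs PEDRIVER to reread SYS$SYSTEM:PE$IP_CONFIG.DAT):
$ MC SCACP RELOAD
To run it cluster-wide with SYSMAN:
$ MC SYSMAN
SYSMAN> SET ENVIRONMENT/CLUSTER
SYSMAN> DO MC SCACP RELOAD
SYSMAN> EXIT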
Note
The following rule applies when IP unicast addresses are used for node
discovery: a node is allowed to join the cluster only if its IP
address is present in the SYS$SYSTEM:PE$IP_CONFIG.DAT file on the
existing members of the cluster.
Step 6. Running AUTOGEN and Rebooting the Node
After the first boot of JASMIN, AUTOGEN.COM runs automatically. JASMIN
will now be able to join the existing cluster consisting of nodes
ORCHID and TULIP.
JASMIN$ @SYS$UPDATE:AUTOGEN GETDATA REBOOT
8.2.3.5 Adding an Integrity server Node to a Cluster over IP with a Logical LAN Failover Set
This section describes how to add a node, ORCHID, to an existing
two-node cluster consisting of JASMIN and TULIP. The logical LAN
failover set is created and configured on ORCHID. If a local LAN card
fails, ORCHID can survive the failure by switching over to the other
interface configured in the logical LAN failover set.
Step 1. Configuring the Logical LAN Failover set
Execute the following commands to create a logical LAN failover set.
$ MC LANCP
LANCP> DEFINE DEVICE LLB/ENABLE/FAILOVER=(EIA0, EIB0)
Reboot the system. During the reboot, the following console messages
are displayed:
%LLB0, Logical LAN event at 2-SEP-2008 14:52:50.06
%LLB0, Logical LAN failset device created
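To confirm the failover set after the reboot, LANCP's SHOW DEVICE
command can be used. A minimal sketch; the /CHARACTERISTICS qualifier
is assumed here to report the failover set membership:
$ MC LANCP SHOW DEVICE LLB0/CHARACTERISTICS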
Step 2. Executing CLUSTER_CONFIG_LAN
Execute CLUSTER_CONFIG_LAN.COM on node ORCHID and select the
appropriate option as shown:
ORCHID$ @SYS$MANAGER:CLUSTER_CONFIG_LAN
Cluster/IPCI Configuration Procedure
CLUSTER_CONFIG_LAN Version V2.84
Executing on an IA64 System
DECnet-Plus is installed on this node.
IA64 satellites will use TCP/IP BOOTP and TFTP services for downline loading.
TCP/IP is installed and running on this node.
Enter a "?" for help at any prompt. If you are familiar with
the execution of this procedure, you may want to mute extra notes
and explanations by invoking it with "@CLUSTER_CONFIG_LAN BRIEF".
This IA64 node is not currently a cluster member.
MAIN Menu
1. ADD ORCHID to existing cluster, or form a new cluster.
2. MAKE a directory structure for a new root on a system disk.
3. DELETE a root from a system disk.
4. EXIT from this procedure.
Enter choice [4]: 1
Is the node to be a clustered node with a shared SCSI/FIBRE-CHANNEL bus (Y/N)? n
What is the node's SCS node name? orchid
IA64 node, using LAN/IP for cluster communications. PEDRIVER will be loaded.
No other cluster interconnects are supported for IA64 nodes.
Enter this cluster's group number: 1985
Enter this cluster's password:
Re-enter this cluster's password for verification:
ENABLE IP for cluster communications (Y/N)? Y
UDP port number to be used for Cluster Communication over IP[49152]? [Return]
Enable IP multicast for cluster communication(Y/N)[Y]? [Return]
What is the IP multicast address[239.242.7.193]? 239.242.7.193
What is the TTL (time to live) value for IP multicast packets [32]? [Return]
Do you want to enter unicast address(es)(Y/N)[Y]? [Return]
What is the unicast address[Press [RETURN] to end the list]? 10.0.1.2
What is the unicast address[Press [RETURN] to end the list]? 10.0.2.3
What is the unicast address[Press [RETURN] to end the list]? 10.0.2.2
What is the unicast address[Press [RETURN] to end the list]? [Return]
*****************************************************************
Cluster Communications over IP has been enabled. Now
CLUSTER_CONFIG_LAN will run the SYS$MANAGER:TCPIP$CONFIG
procedure. Please select the IP interfaces to be used for
Cluster Communications over IP (IPCI). This can be done
selecting "Core Environment" option from the main menu
followed by the "Interfaces" option. You may also use
this opportunity to configure other aspects.
****************************************************************
Press Return to continue ...
TCP/IP Network Configuration Procedure
This procedure helps you define the parameters required
to run HP TCP/IP Services for OpenVMS on this system.
%TCPIP-I-IPCI, TCP/IP Configuration is limited to IPCI.
-TCPIP-I-IPCI, Rerun TCPIP$CONFIG after joining the cluster.
HP TCP/IP Services for OpenVMS Interface & Address Configuration Menu
Hostname Details: Configured=Not Configured, Active=nodeg
Configuration options:
0 - Set The Target Node (Current Node: ORCHID)
1 - LE0 Menu (LLB0: TwistedPair 100mbps)
2 - IE1 Menu (EIB0: TwistedPair 100mbps)
[E] - Exit menu
Enter configuration option: 1
* IPCI Address Configuration *
Only IPCI addresses can be configured in the current environment.
After configuring your IPCI address(es) it will be necessary to
run TCPIP$CONFIG once your node has joined the cluster.
IPv4 Address may be entered with CIDR bits suffix.
E.g. For a 16-bit netmask enter 10.0.1.1/16
Enter IPv4 Address []:10.0.1.2
Default netmask calculated from class of IP address: 255.0.0.0
IPv4 Netmask may be entered in dotted decimal notation,
(e.g. 255.255.0.0), or as number of CIDR bits (e.g. 16)
Enter Netmask or CIDR bits [255.0.0.0]: 255.255.255.0
Requested configuration:
Node : ORCHID
Interface: LE0
IPCI : Yes
Address : 10.0.1.2/24
Netmask : 255.255.255.0 (CIDR bits: 24)
* Is this correct [YES]:
Updated Interface in IPCI configuration file: SYS$SYSROOT:[SYSEXE]TCPIP$CLUSTER.DAT;
HP TCP/IP Services for OpenVMS Interface & Address Configuration Menu
Hostname Details: Configured=Not Configured, Active=nodeg
Configuration options:
0 - Set The Target Node (Current Node: ORCHID)
1 - LE0 Menu (LLB0: TwistedPair 100mbps)
2 - 10.0.1.2/24 ORCHID IPCI
3 - IE1 Menu (EIB0: TwistedPair 100mbps)
[E] - Exit menu
Enter configuration option: E
Enter your Default Gateway address []: 10.0.1.1
* The default gateway will be: 10.0.1.1 Correct [NO]: YES
Updated Default Route in IPCI configuration file: SYS$SYSROOT:[SYSEXE]TCPIP$CLUSTER.DAT;
%TCPIP-I-IPCIDONE, Finished configuring IPCI address information.
The SYS$SYSTEM:PE$IP_CONFIG.DAT file generated in node ORCHID's root is shown below:
! CLUSTER_CONFIG_LAN creating for CHANGE operation on 15-JUL-2008 15:23:56.05
multicast_address=239.242.7.193
ttl=32
udp_port=49152
unicast=10.0.2.3
unicast=10.0.2.2
unicast=10.0.1.2
The SYS$SYSTEM:TCPIP$CLUSTER.DAT file generated in node ORCHID's root is shown below:
interface=LE0,LLB0,10.0.1.2,255.255.255.0
default_route=10.0.1.1
Step 3. Completing the Configuration Procedure
Continue to run CLUSTER_CONFIG_LAN.COM to complete the cluster
configuration procedure. For more information, see Section 8.2.3.1.
Step 4. Updating the PE$IP_CONFIG.DAT file
To ensure that the nodes join the cluster, PE$IP_CONFIG.DAT must be
consistent across all members of the cluster. Copy the
SYS$SYSTEM:PE$IP_CONFIG.DAT file that is created on node ORCHID to the
other nodes, JASMIN and TULIP.
Step 5. Refreshing the Unicast list
On both JASMIN and TULIP, update the new unicast list in the
PE$IP_CONFIG.DAT file by issuing the refresh command for PEDRIVER (see
the SCACP sketch in the preceding Step 5 example). You can also use
SYSMAN to run the command cluster-wide.
Step 6. Running AUTOGEN and Rebooting the Node
After the first boot of ORCHID, AUTOGEN.COM will run automatically.
ORCHID will now be able to join the existing cluster consisting of
nodes JASMIN and TULIP.
ORCHID$ @SYS$UPDATE:AUTOGEN GETDATA REBOOT
8.2.4 Adding a Quorum Disk
To enable a quorum disk on a node or nodes, use the cluster
configuration procedure as described in Table 8-5.
Table 8-5 Preparing to Add a Quorum Disk Watcher

IF: Other cluster nodes are already enabled as quorum disk watchers.
THEN: Perform the following steps:
  1. Log in to the computer that is to be enabled as the quorum disk
     watcher and run CLUSTER_CONFIG_LAN.COM or CLUSTER_CONFIG.COM.
  2. Execute the CHANGE function and select menu item 7 to enable a
     quorum disk. (See Section 8.4.)
  3. Update the current system parameters and reboot the node. (See
     Section 8.6.1.)

IF: The cluster does not contain any quorum disk watchers.
THEN: Perform the following steps:
  1. Perform the preceding steps 1 and 2 for each node to be enabled
     as a quorum disk watcher.
  2. Reconfigure the cluster according to the instructions in
     Section 8.6.
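For reference, enabling a quorum disk ultimately sets the DISK_QUORUM
and QDSKVOTES system parameters. A hedged sketch of the MODPARAMS.DAT
entries involved; the device name $1$DGA12: is purely illustrative:
! Quorum disk settings (hypothetical device name)
DISK_QUORUM = "$1$DGA12:"   ! physical name of the quorum disk
QDSKVOTES = 1               ! votes contributed by the quorum disk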
8.3 Removing Computers
To disable a computer as an OpenVMS Cluster member:
- Determine whether removing a member will cause you to lose quorum.
  Use the SHOW CLUSTER command to display the CL_QUORUM and CL_VOTES
  values.

  IF: Removing members will cause you to lose quorum.
  THEN: Perform the steps in the following list.
  Caution: Do not perform these steps until you are ready to reboot
  the entire OpenVMS Cluster system. Because you are reducing quorum
  for the cluster, the votes cast by the node being removed could
  cause a cluster partition to be formed.

  IF: Removing members will not cause you to lose quorum.
  THEN: Proceed as follows:
  - Perform an orderly shutdown on the node being removed by invoking
    the SYS$SYSTEM:SHUTDOWN.COM command procedure (described in
    Section 8.6.3).
  - If the node was a voting member, use the DCL command SET
    CLUSTER/EXPECTED_VOTES to reduce the value of quorum, as in the
    sketch following the Reference note.

  Reference: Refer also to Section 10.11 for
  information about adjusting expected votes.
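As a hedged illustration of the two commands named above (the CL_
fields are added in SHOW CLUSTER's continuous mode, and the vote count
3 is arbitrary):
$ SHOW CLUSTER/CONTINUOUS
Command > ADD CLUSTER        ! adds CL_QUORUM and CL_VOTES columns
Command > EXIT
$ SET CLUSTER/EXPECTED_VOTES=3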
- Invoke CLUSTER_CONFIG_LAN.COM or CLUSTER_CONFIG.COM on an active
OpenVMS Cluster computer and select the REMOVE option.
- Use the information in Table 8-6 to determine whether
additional actions are required.
Table 8-6 Preparing to Remove Computers from an OpenVMS Cluster

IF: You are removing a voting member.
THEN: You must, after the REMOVE function completes, reconfigure the
cluster according to the instructions in Section 8.6.

IF: The page and swap files for the computer being removed do not
reside on the same disk as the computer's root directory tree.
THEN: The REMOVE function does not delete these files. It displays a
message warning that the files will not be deleted, as in Example 8-6.
If you want to delete the files, you must do so after the REMOVE
function completes (see the sketch after this table).

IF: You are removing a computer from a cluster that uses DECdtm
services.
THEN: Make sure that you have followed the step-by-step instructions
in the chapter on DECdtm services in the HP OpenVMS System Manager's
Manual. These instructions describe how to remove a computer safely
from the cluster, thereby preserving the integrity of your data.
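If you choose to delete the leftover page and swap files, a hedged
sketch follows; the device name is illustrative, and the file names
assume the PAGEFILE_node.SYS and SWAPFILE_node.SYS convention shown
earlier in this chapter:
$ DELETE $1$DKA100:[SYSEXE]PAGEFILE_GOMTHI.SYS;*
$ DELETE $1$DKA100:[SYSEXE]SWAPFILE_GOMTHI.SYS;*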
Note: When the REMOVE function deletes the computer's
entire root directory tree, it generates OpenVMS RMS informational
messages while deleting the directory files. You can ignore these
messages.
8.3.1 Example
Example 8-6 illustrates the use of CLUSTER_CONFIG_LAN.COM on BHAGAT to
remove satellite GOMTHI from the cluster.
Example 8-6 Sample Interactive CLUSTER_CONFIG_LAN.COM Session to
Remove a Satellite with Local Page and Swap Files
$ @CLUSTER_CONFIG_LAN.COM
Cluster/IPCI Configuration Procedure
CLUSTER_CONFIG_LAN Version V2.84
Executing on an IA64 System
DECnet-Plus is installed on this node.
IA64 satellites will use TCP/IP BOOTP and TFTP services for downline loading.
TCP/IP is installed and running on this node.
Enter a "?" for help at any prompt. If you are familiar with
the execution of this procedure, you may want to mute extra notes
and explanations by invoking it with "@CLUSTER_CONFIG_LAN BRIEF".
BHAGAT is an IA64 system and currently a member of a cluster
so the following functions can be performed:
MAIN Menu
1. ADD an IA64 node to the cluster.
2. REMOVE a node from the cluster.
3. CHANGE a cluster member's characteristics.
4. CREATE a duplicate system disk for BHAGAT.
5. MAKE a directory structure for a new root on a system disk.
6. DELETE a root from a system disk.
7. EXIT from this procedure.
Enter choice [7]: 2
The REMOVE command disables a node as a cluster member.
o It deletes the node's root directory tree.
o If the node has entries in SYS$DEVICES.DAT, any port allocation
class for shared SCSI bus access on the node must be re-assigned.
If the node being removed is a voting member, EXPECTED_VOTES
in each remaining cluster member's MODPARAMS.DAT must be adjusted.
The cluster must then be rebooted.
For instructions, see the "OpenVMS Cluster Systems" manual.
CAUTION: The REMOVE command does not remove the node name from any
network databases. Also, if a satellite has been set up for booting
with multiple hardware addresses, the satellite's aliases are not
cleared from the LANACP boot database.
What is the node's SCS node name? GOMTHI
Verifying BOOTP satellite node database...
Verifying that $1$DKA0:[SYS10] is GOMTHI's root...
Are you sure you want to remove node GOMTHI (Y/N)? Y
WARNING: GOMTHI's page and swap files will not be deleted.
They do not reside on $1$DKA0:.
Deleting directory tree $1$DKA0:<SYS10...>
%DELETE-I-FILDEL, $1$DKA0:<SYS10.SYS$I18N.LOCALES>SYSTEM.DIR;1 deleted (16 blocks)
.
.
.
.
%DELETE-I-FILDEL, $1$DKA0:<SYS10>VPM$SERVER.DIR;1 deleted (16 blocks)
%DELETE-I-TOTAL, 21 files deleted (336 blocks)
%DELETE-I-FILDEL, $1$DKA0:<0,0>SYS10.DIR;1 deleted (16 blocks)
System root $1$DKA0:<SYS10> deleted.
Updating BOOTP database...
Removing rights identifier for GOMTHI...
The configuration procedure has completed successfully.
8.3.2 Removing a Quorum Disk
To disable a quorum disk on a node or nodes, use the cluster
configuration command procedure as described in Table 8-7.
Table 8-7 Preparing to Remove a Quorum Disk Watcher
IF: Other cluster nodes will still be enabled as quorum disk watchers.
THEN: Perform the following steps:
  1. Log in to the computer that is to be disabled as the quorum disk
     watcher and run CLUSTER_CONFIG_LAN.COM or CLUSTER_CONFIG.COM.
  2. Execute the CHANGE function and select menu item 7 to disable a
     quorum disk (see Section 8.4).
  3. Reboot the node (see Section 8.6.7).

IF: All quorum disk watchers will be disabled.
THEN: Perform the following steps:
  1. Perform the preceding steps 1 and 2 for all computers with the
     quorum disk enabled.
  2. Reconfigure the cluster according to the instructions in
     Section 8.6.