HP OpenVMS Cluster Systems



9.4.2 Changing the Default Boot Adapter

To change the default boot adapter, you need the physical address of the alternate LAN adapter. You use the address to update the satellite's node definition in the DECnet or LANCP database on the MOP servers so that they recognize the satellite (described in Section 9.4.4). Use the SHOW CONFIG command to find the LAN address of additional adapters.
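
For example, at the satellite's console you might display the adapters and their hardware addresses and then boot from the alternate adapter. The following sketch assumes the alternate adapter is named ezb0; device names and console output vary by system:

>>> SHOW CONFIG
>>> b -flags 0,1 ezb0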

9.4.3 Booting from Multiple LAN Adapters (Alpha Only)

On Alpha systems, availability can be increased by using multiple LAN adapters for booting because access to the MOP server and disk server can occur via different LAN adapters. To use multiple adapter booting, perform the steps in the following table.
Step Task
1 Obtain the physical addresses of the additional LAN adapters.
2 Use these addresses to update the node definition in the DECnet or LANCP database on some of the MOP servers so that they recognize the satellite (described in Section 9.4.4).
3 If the satellite is already defined in the DECnet database, skip to step 4. If the satellite is not defined in the DECnet database, specify the SYS$SYSTEM:APB.EXE downline load file in the Alpha network database.
4 Specify multiple LAN adapters on the boot command line. (Use the SHOW DEVICE or SHOW CONFIG console command to obtain the names of adapters.)

The following command line is the same as that used for booting from a single LAN adapter on an Alpha system (see Section 9.4.2) except that it lists two LAN adapters, eza0 and ezb0, as the devices from which to boot:


>>> b -flags 0,1 eza0, ezb0

In this command line:
Stage What Happens
1 MOP booting is attempted from the first device (eza0). If that fails, MOP booting is attempted from the next device (ezb0). When booting from network devices, if the MOP boot attempt fails from all devices, then the console starts again from the first device.
2 Once the MOP load has completed, the boot driver starts the NISCA protocol on all of the LAN adapters. The NISCA protocol is used to access the system disk server and finish loading the operating system (see Appendix F).

9.4.4 Enabling Satellites to Use Alternate LAN Adapters for Booting

OpenVMS supports only one hardware address attribute per remote node definition in either a DECnet or LANCP database. To enable a satellite with multiple LAN adapters to use any of those adapters to boot into the cluster, you can use either of two methods: define pseudonodes for the additional LAN adapters, or create different node databases for different boot nodes. Both methods are described below.

Defining Pseudonodes for Additional LAN Adapters

When defining a pseudonode with a different DECnet or LANCP address, follow the procedure shown in Table 9-3 for DECnet or in Table 9-4 for LANCP.

Table 9-3 Procedure for Defining a Pseudonode Using DECnet MOP Services
Step Procedure Comments
1 Display the node's existing definition using the following NCP command:
$ RUN SYS$SYSTEM:NCP

NCP> SHOW NODE node-name CHARACTERISTICS
This command displays a list of the satellite's characteristics, such as its hardware address, load assist agent, load assist parameter, and more.
2 Create a pseudonode by defining a unique DECnet address and node name at the NCP command prompt, as follows:
DEFINE NODE pseudo-area.pseudo-number -
NAME pseudo-node-name -
LOAD FILE APB.EXE -
LOAD ASSIST AGENT SYS$SHARE:NISCS_LAA.EXE -
LOAD ASSIST PARAMETER disk$sys:[<root.>] -
HARDWARE ADDRESS xx-xx-xx-xx-xx-xx
This example is specific to an Alpha node.
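
For example, suppose a satellite boots from root SYS10 on a disk labeled DISK$ALPHASYS and its alternate LAN adapter has the hardware address 08-00-2B-12-34-56. A pseudonode definition might then look like the following sketch (the node name SATALT, the DECnet address 2.310, the disk label, and the hardware address are all hypothetical; substitute values appropriate to your cluster):

$ RUN SYS$SYSTEM:NCP
NCP> DEFINE NODE 2.310 NAME SATALT -
_ LOAD FILE APB.EXE -
_ LOAD ASSIST AGENT SYS$SHARE:NISCS_LAA.EXE -
_ LOAD ASSIST PARAMETER DISK$ALPHASYS:[SYS10.] -
_ HARDWARE ADDRESS 08-00-2B-12-34-56

The DECnet address chosen for the pseudonode must be unused in your network, and the load assist parameter is the same system root that the satellite's primary node definition uses.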


Table 9-4 Procedure for Defining a Pseudonode Using LANCP MOP Services
Step Procedure Comments
1 Display the node's existing definition using the following LANCP command:
$ RUN SYS$SYSTEM:LANCP

LANCP> SHOW NODE node-name
This command displays a list of the satellite's characteristics, such as its hardware address and root directory address.
2 Create a pseudonode by defining a unique LANCP address and node name at the LANCP command prompt, as follows:
DEFINE NODE pseudo-node-name -
/FILE=APB.EXE -
/ROOT=disk$sys:[<root.>] -
/ADDRESS=xx-xx-xx-xx-xx-xx
This example is specific to an Alpha node.
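
A comparable sketch using LANCP, with the same hypothetical values (node name SATALT, root SYS10 on DISK$ALPHASYS, and the alternate adapter's address):

$ RUN SYS$SYSTEM:LANCP
LANCP> DEFINE NODE SATALT /FILE=APB.EXE /ROOT=DISK$ALPHASYS:[SYS10.] /ADDRESS=08-00-2B-12-34-56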

Creating Different Node Databases for Different Boot Nodes

When creating different DECnet or LANCP databases on different boot nodes, the procedures are similar for DECnet and LANCP, but the database file names, utilities, and commands differ. For the DECnet procedure, see Table 9-5; for the LANCP procedure, see Table 9-6.

Table 9-5 Procedure for Creating Different DECnet Node Databases
Step Procedure Comments
1 Define the logical name NETNODE_REMOTE to different values on different nodes so that it points to different files. The logical NETNODE_REMOTE points to the working copy of the remote node file you are creating.
2 Locate NETNODE_REMOTE.DAT files in the system-specific area for each node.

On each of the various boot servers, ensure that the hardware address is defined as a unique address that matches one of the adapters on the satellite. Enter the following commands at the NCP command prompt:

DEFINE NODE area.number -
NAME node-name -
LOAD FILE APB.EXE -
LOAD ASSIST AGENT SYS$SHARE:NISCS_LAA.EXE -
LOAD ASSIST PARAMETER disk$sys:[<root.>] -
HARDWARE ADDRESS xx-xx-xx-xx-xx-xx
A NETNODE_REMOTE.DAT file located in [SYS0.SYSEXE] overrides one located in [SYS0.SYSCOMMON.SYSEXE] for a system booting from system root 0.

If the NETNODE_REMOTE.DAT files are copies of each other, the node name, LOAD FILE, load assist agent, and load assist parameter are already set up. You need only specify the new hardware address.

Because the default hardware address is stored in NETNODE_UPDATE.COM, you must also edit this file on the second boot server.
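
As a sketch of step 1, each boot server might define the logical name in its system-specific startup so that it points to that server's own copy of the file. The file location shown here is one possible choice, matching the system-specific area mentioned above; executive mode is shown as a conservative choice:

$ DEFINE/SYSTEM/EXECUTIVE_MODE NETNODE_REMOTE SYS$SPECIFIC:[SYSEXE]NETNODE_REMOTE.DAT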

Table 9-6 Procedure for Creating Different LANCP Node Databases
Step Procedure Comments
1 Define the logical name LAN$NODE_DATABASE to different values on different nodes so that it points to different files. The logical LAN$NODE_DATABASE points to the working copy of the remote node file you are creating.
2 Locate LAN$NODE_DATABASE.DAT files in the system-specific area for each node.

On each of the various boot servers, ensure that the hardware address is defined as a unique address that matches one of the adapters on the satellite. Enter the following commands at the LANCP command prompt:

DEFINE NODE node-name -
/FILE=APB.EXE -
/ROOT=disk$sys:[<root.>] -
/ADDRESS=xx-xx-xx-xx-xx-xx
If the LAN$NODE_DATABASE.DAT files are copies of each other, the node name and the FILE and ROOT qualifier values are already set up. You need only specify the new address.
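
Similarly, a sketch of step 1 for LANCP (again, the file location is one possible choice):

$ DEFINE/SYSTEM/EXECUTIVE_MODE LAN$NODE_DATABASE SYS$SPECIFIC:[SYSEXE]LAN$NODE_DATABASE.DAT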

Once the satellite receives the MOP downline load from the MOP server, the satellite uses the booting LAN adapter to connect to any node serving the system disk. The satellite continues to use the LAN adapters on the boot command line exclusively until after the run-time drivers are loaded. The satellite then switches to using the run-time drivers and starts the local area OpenVMS Cluster protocol on all of the LAN adapters.

For additional information about the NCP command syntax, refer to DECnet for OpenVMS Network Management Utilities.

For DECnet-Plus: On an OpenVMS Cluster running DECnet-Plus, you do not need to take these actions to support a satellite with more than one LAN adapter. The DECnet-Plus support for downline loading a satellite allows a node database entry to contain a list of LAN adapter addresses. See the DECnet-Plus documentation for complete information.

9.4.5 Configuring MOP Service

On a boot node, CLUSTER_CONFIG.COM enables the DECnet MOP downline load service on the first circuit that is found in the DECnet database.

On systems running DECnet for OpenVMS, display the circuit state and the service (MOP downline load service) state using the following command:


$ MCR NCP SHOW CHAR KNOWN CIRCUITS


           . 
           . 
           . 
   Circuit = SVA-0 
 
   State                    = on 
   Service                  = enabled  
           . 
           . 
           . 

This example shows that circuit SVA-0 is in the ON state with the MOP downline load service enabled. This is the correct state to support MOP downline loading for satellites.

Enabling MOP service on additional LAN adapters (circuits) must be performed manually. For example, enter the following NCP commands to enable service for the circuit QNA-1:


$ MCR NCP SET CIRCUIT QNA-1 STATE OFF
$ MCR NCP SET CIRCUIT QNA-1 SERVICE ENABLED STATE ON
$ MCR NCP DEFINE CIRCUIT QNA-1 SERVICE ENABLED
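
To confirm the new settings, you can display the circuit again; for example:

$ MCR NCP SHOW CIRCUIT QNA-1 CHARACTERISTICS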

Reference: For more details, refer to DECnet-Plus for OpenVMS Network Management.

9.4.6 Controlling Satellite Booting

You can control the satellite boot process in a number of ways. Table 9-7 shows examples specific to DECnet for OpenVMS. Refer to the DECnet-Plus documentation for equivalent information.

Table 9-7 Controlling Satellite Booting
Method Comments
Disable MOP service on MOP servers temporarily
Until the MOP server can complete its own startup operations, boot requests can be temporarily disabled by setting the DECnet Ethernet circuit to a "Service Disabled" state as shown:
1 To disable MOP service during startup of a MOP server, enter the following commands:
$ MCR NCP DEFINE CIRCUIT MNA-1 -
_$ SERVICE DISABLED
$ @SYS$MANAGER:STARTNET
$ MCR NCP DEFINE CIRCUIT MNA-1 -
_$ SERVICE ENABLED
2 To reenable MOP service later, enter the following commands in a command procedure so that they execute quickly and so that DECnet service to the users is not disrupted:
$ MCR NCP

NCP> SET CIRCUIT MNA-1 STATE OFF
NCP> SET CIRCUIT MNA-1 SERVICE ENABLED
NCP> SET CIRCUIT MNA-1 STATE ON
This method prevents the MOP server from servicing the satellites; it does not prevent the satellites from requesting a boot from other MOP servers.

If a satellite that is requesting a boot receives no response, it will make fewer boot requests over time. Thus, booting the satellite may take longer than normal once MOP service is reenabled.

  1. MNA-1 represents the MOP service circuit.

    After entering these commands, service will be disabled in the volatile database. Do not disable service permanently.

  2. Reenable service as shown.
Disable MOP service for individual satellites
You can temporarily disable MOP service requests on a per-node basis by clearing a node's information from the DECnet database on the MOP server using NCP, and then reenabling nodes as desired to control booting:
1 To disable MOP service for a given node, enter the following command:
$ MCR NCP

NCP> CLEAR NODE satellite HARDWARE ADDRESS
2 To reenable MOP service for that node, enter the following command:
$ MCR NCP

NCP> SET NODE satellite ALL
This method does not prevent satellites from requesting boot service from another MOP server.
  1. After entering the commands, service will be disabled in the volatile database. Do not disable service permanently.
  2. Reenable service as shown.
Bring satellites to console prompt on shutdown
Use any of the following methods to set up a satellite so that it halts (rather than reboots) upon restoration of power or execution of a HALT instruction.
1 Use the VAXcluster Console System (VCS).
2 Set the console so that the satellite stops in console mode upon halt or power-up:

For Alpha computers:

>>> SET AUTO_ACTION HALT

3 Set up a satellite so that it stops in console mode when a HALT instruction is executed, as described in the following list.
  1. Enter the following NCP commands so that a reboot loads an image that executes a HALT instruction:
    $ MCR NCP
    
    NCP> CLEAR NODE node LOAD ASSIST PARAMETER
    NCP> CLEAR NODE node LOAD ASSIST AGENT
    NCP> SET NODE node LOAD FILE -
    _ MOM$LOAD:READ_ADDR.SYS
  2. Shut down the satellite, and specify an immediate reboot using the following SYSMAN command:
    $ MCR SYSMAN
    
    SYSMAN> SET ENVIRONMENT/NODE=satellite
    SYSMAN> DO @SYS$UPDATE:AUTOGEN REBOOT
  3. When you want to allow the satellite to boot normally, enter the following NCP commands so that OpenVMS will be loaded later:
    $ MCR NCP
    
    NCP> SET NODE satellite ALL
If you plan to use the DECnet TRIGGER operation, it is important to use a program that executes a HALT instruction so that the satellite enters console mode, because systems that support remote triggering support it only while the system is in console mode.
  1. Some, but not all, satellites can be set up so they halt upon restoration of power or execution of a HALT instruction rather than automatically rebooting.

    Note: You need to enter the SET commands only once on each system because the settings are saved in nonvolatile RAM.

  2. The READ_ADDR.SYS program, which is normally used to find out the Ethernet address of a satellite node, also executes a HALT instruction upon its completion.

Important: When the console halt action is set up as described in Table 9-7 (for example, with SET AUTO_ACTION HALT on Alpha computers), a power failure causes the satellite to stop at the console prompt instead of automatically rebooting when power is restored. This is appropriate for a mass power failure, but if someone trips over the power cord of a single satellite, the result can be unnecessary unavailability.

You can provide a way to scan for satellites that have gone down this way and trigger them to reboot by periodically running a batch job that performs the following tasks (a sketch of such a procedure follows the list):

  1. Uses the DCL lexical function F$GETSYI to check each node that should be in the cluster.
  2. Checks the CLUSTER_MEMBER lexical item.
  3. Issues an NCP TRIGGER command for any satellite that is not currently a member of the cluster.
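
The following DCL sketch illustrates such a batch job; the node names are hypothetical, and you would tailor the list and error handling to your cluster:

$! REBOOT_SCAN.COM -- sketch of a periodic satellite reboot scan
$! The node names below are examples only.
$ NODES = "SAT1,SAT2,SAT3"
$ I = 0
$LOOP:
$ NODE = F$ELEMENT(I, ",", NODES)
$ IF NODE .EQS. "," THEN EXIT        ! end of list
$! Trigger any listed node that is not currently a cluster member
$ IF .NOT. F$GETSYI("CLUSTER_MEMBER", NODE) THEN MCR NCP TRIGGER NODE 'NODE'
$ I = I + 1
$ GOTO LOOP

To run the scan periodically, the procedure could, for example, resubmit itself with a SUBMIT/AFTER command.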

9.5 Configuring and Booting Satellite Nodes (Integrity servers)

Satellite

Any OpenVMS Version 8.3 system or an nPartition of a cell-based system can be used as a satellite. Support for nPartitions may require a firmware upgrade.

Satellite boot is supported over the core I/O LAN adapters only. All satellite systems must contain at least one local disk to support crash dumps and saving of the error log buffers across reboots. Diskless systems will not be able to take crash dumps in the event of abnormal software termination.

Boot Server

All Integrity server systems supported by OpenVMS Version 8.3 are supported as boot servers. At this time, HP does not support cross-architecture booting for Integrity server satellite systems, so any cluster containing Integrity server satellite systems must have at least one Integrity server system to act as a boot node as well.

Required Software

As with other satellite systems, the system software is read from a disk served to the cluster by one or more nodes. The satellite system disk may be the same as the boot server's system disk, but it need not be. Unlike Alpha satellites, for which mounting the system disk on the boot server was recommended but not required, Integrity server satellite systems require that the system disk be mounted on the boot server.

TCP/IP must be installed on the boot server's system disk. OpenVMS Version 8.3 must be installed on both the boot server's system disk and the satellite's system disk, if they are different.

TCP/IP must be configured with BOOTP, TFTP, and one or more interfaces enabled. At least one configured interface must be connected to a segment visible to the satellite systems. The boot server and all satellite systems require an IP address. See the HP TCP/IP Services for OpenVMS Version 5.6 Installation and Configuration manual for details about configuring TCP/IP Services for OpenVMS.
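
For example, you can verify the relevant pieces of the TCP/IP configuration from DCL before proceeding (a sketch; the exact output depends on your configuration):

$ TCPIP SHOW SERVICE BOOTP
$ TCPIP SHOW SERVICE TFTP
$ TCPIP SHOW INTERFACE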

9.5.1 Collecting Information from the Satellite System

If the satellite has a local disk with a version of OpenVMS installed, log in. If not, you may boot the installation DVD, select option 8 (Execute DCL commands and procedures), and execute the following commands:


$ LANCP :== $LANCP 
$ LANCP SHOW CONFIG 
 
LAN Configuration: 
Device Parent Medium/User Version Link Speed Duplex Size MAC Address       Current Address   Type 
------------- ----------- ------- ---- ----- ------ ---- ----------------- ---------------   ---------- 
EIB0          Ethernet    X-16    Up   1000  Full   1500 00-13-21-5B-86-49 00-13-21-5B-86-49 UTP i82546 
EIA0          Ethernet    X-16    Up   1000  Full   1500 00-13-21-5B-86-48 00-13-21-5B-86-48 UTP i82546 

Record the MAC address for the adapter you will use for booting. You will need it when defining the satellite system to the boot server. If the current address differs from the MAC address, use the MAC address.

9.5.2 Setting up the Satellite System for Booting and Crashing

If the satellite has a local disk with a version of OpenVMS installed, log in. If not, you may boot the installation DVD and select option 8 (Execute DCL commands and procedures). Use SYS$MANAGER:BOOT_OPTIONS.COM to add a boot menu option for the network adapter from which you are booting. The procedure asks whether this network entry is for a satellite boot; if so, it sets the Memory Disk boot option flag (0x200000) for that boot menu entry. The Memory Disk flag is required for satellite boot.

If you intend to use the system primarily for satellite boot, place the network boot option at position 1. The satellite system also requires DOSD (Dump Off the System Disk) for crash dumps and for saving the unwritten error log buffers across reboots and crashes. BOOT_OPTIONS.COM can also be used to manage the DOSD device list; you may wish to create the DOSD device list at this time. See the HP OpenVMS System Manager's Manual, Volume 2: Tuning, Monitoring, and Complex Systems for information about setting up a DOSD device list.
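
For example, you would run the menu-driven procedure from a suitably privileged account and follow its prompts for the device, the boot menu position, and the satellite boot question described above:

$ @SYS$MANAGER:BOOT_OPTIONS.COM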

