
DECnet-Plus for OpenVMS
Applications Installation and Advanced Configuration



3.10 Reconfiguring the MOP Client Database

To reconfigure the MOP client database, proceed as follows from the Configuration Options menu:


* Which configuration option to perform?                 [1] : 8

Select Option 8 and press Return.


* Do you want to ADD or DELETE a MOP Client?           [ADD] :

Answer ADD to create an entity on the specified node, allocate resources for it, and open the service interface. Answer DELETE to delete the entity from the specified node and reclaim associated resources.


* Name of the MOP Client?                              : SUPERX

Specify the simple name of the client (for example, SUPERX).

If you elect to delete the MOP client, the procedure displays the following prompt:


* Are you sure you want to DELETE this client?               :

If you want to delete this client, answer YES.

If you are adding a MOP client, the procedure continues with the following prompts.


* Circuit for 'superx'?                                      :

Specify the name of the MOP circuit you want to use for this client.


* Physical addresses for 'superx'?                           :

Specify the set of LAN addresses for the client on the circuit specified by the CIRCUIT characteristic.


* Secondary Loader for 'superx'?                             :

Specify the files you want loaded when the client requests a secondary loader during a downline load operation. File identifications are interpreted according to the file system of the local system.


* Tertiary Loader for 'superx'?                              :

Specify the files you want loaded when the client requests a tertiary loader during a downline load operation. File identifications are interpreted according to the file system of the local system.


* System Image for 'superx'?                                 :

Specify the files you want loaded when the client requests a system image during a downline load operation. File identifications are interpreted according to the file system of the local system.


* Diagnostic Image for 'superx'?                             :

Specify the files you want loaded when the client requests a diagnostic image during a downline load operation. File identifications are interpreted according to the file system of the local system.


* Management Image for 'superx'?                             :

Specify the files you want loaded when the client requests a management image during a downline load operation. File identifications are interpreted according to the file system of the local system.


* Script File for 'superx'?                                  :

Specify the files you want loaded when the client requests a CMIP initialization script during a downline load operation. File identifications are interpreted according to the file system of the local system.


* Dump File for 'superx'?                                    :

Specify the files to write to when the client is upline dumped.


* Dump Address for 'superx'?                             [0] :

Specify the memory address in the client at which the upline dump begins.


* Verification for 'superx'?            [%X0000000000000000] :

Specify the verification string you want sent in a boot message to the specified client.


* Phase IV Client Address (aa.nnn) for 'superx'?             :

Specify the Phase IV node address you want given to the client system when it is loaded. This address is passed in a load characteristics message; whether it is needed depends on the software being loaded.


* Phase IV Client Name for 'superx'?                      [] :

Specify the Phase IV node name you want given to the client system when it is loaded. This name is passed in a load characteristics message; whether it is needed depends on the software being loaded.


* Phase IV Host Address for 'superx'?                        :

Specify the Phase IV node address you want passed as the host node address when a client is loaded. This address is passed in a load characteristics message; whether it is needed depends on the software being loaded.


* Phase IV Host Name for 'superx'?                        [] :

Specify the Phase IV node name you want passed as the host node name when a client is loaded. This name is passed in a load characteristics message; whether it is needed depends on the software being loaded.


* Do you want to generate NCL configuration scripts?   [YES] :

If you answer YES, the configuration program uses the information you have entered to create the MOP client NCL script. The configuration program then returns to the Configuration Options menu. To implement the MOP client NCL script, reboot the system or disable the entity and execute the script.

If you answer NO, the configuration procedure returns to the Configuration Options menu and does not generate NCL scripts.
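
After the script is generated, you can review it with DCL before applying it. The following is only a hedged sketch: the NCL invocation on the last line is an assumption about how to execute a script file manually and is not taken from this manual, so verify it before relying on it.

$ TYPE SYS$MANAGER:NET$MOP_CLIENT_STARTUP.NCL
$ ! Assumed invocation for executing the script without a reboot:
$ MCR NCL @SYS$MANAGER:NET$MOP_CLIENT_STARTUP.NCL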

3.11 Reconfiguring Event, MOP Client, Application Entities, and the Naming Search Path

You can use user-defined site-specific NCL scripts for the Event Dispatcher, MOP client, application, and search path entities. (See Section 2.2.4 for further information about creating the site-specific search path script.)

The network startup procedure calls four site-specific NCL scripts (if they exist) when the network is started. These scripts must be in SYS$MANAGER. They are called immediately after their DIGITAL-supplied counterparts are executed. The following table lists the scripts and their counterparts.

User-Defined Site-Specific Script      DIGITAL-Supplied Script
NET$EVENT_LOCAL.NCL                    NET$EVENT_STARTUP.NCL
NET$APPLICATION_LOCAL.NCL              NET$APPLICATION_STARTUP.NCL
NET$MOP_CLIENT_LOCAL.NCL               NET$MOP_CLIENT_STARTUP.NCL
NET$SEARCHPATH_LOCAL.NCL               NET$SEARCHPATH_STARTUP.NCL

These scripts are user-defined and user-maintained and thus will not be overwritten or deleted by net$configure. DIGITAL recommends that, whenever possible, you place your site-specific changes in these user-defined NCL scripts.

Note

If you invoke net$configure.com to edit a standard NCL script (NET$entity_STARTUP.NCL), the standard NCL script is superseded and renamed to NET$entity_STARTUP.NCL-OLD (where entity is a particular entity name).

If you must make changes to the standard NCL scripts and you want to retain your modifications after invoking net$configure, you can either manually edit the newly generated NCL script to reapply your modifications, or rename the appropriate NET$entity_STARTUP.NCL-OLD script back to NET$entity_STARTUP.NCL, being sure to incorporate any new changes generated by net$configure. The net$configure.com procedure flags these modifications the next time it checksums the scripts.
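
For example, the rename can be done with ordinary DCL commands. This is a minimal sketch showing the MOP client script; substitute the entity name you need:

$ DIRECTORY SYS$MANAGER:NET$*_STARTUP.NCL-OLD
$ RENAME SYS$MANAGER:NET$MOP_CLIENT_STARTUP.NCL-OLD -
         SYS$MANAGER:NET$MOP_CLIENT_STARTUP.NCL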

3.12 Reconfiguring the Cluster Alias

Some or all of the nodes in an OpenVMS Cluster can be represented in the network as a single node by establishing an alias for the cluster. The alias allows users access to common resources on the OpenVMS Cluster without knowing which nodes are members of the cluster. It is not necessary for every member of the cluster to join the alias. Refer to the DECnet-Plus for OpenVMS Network Management guide for more information about setting up an OpenVMS Cluster alias.

To reconfigure the cluster alias, proceed as follows:


* Which configuration option to perform?                 [1] : 9

Select Option 9 and press Return.


* Do you want to ADD or DELETE an alias?               [ADD] :

Answer ADD to add this node as a member of the specified cluster alias. Answer DELETE to remove this node from the specified cluster alias.


* Full name of Cluster Alias                                 :

Specify the full name that uniquely identifies the cluster alias node (for example, IAF:.SALES.BOSTON).

If you are removing a node from the cluster alias, the procedure displays the following prompt:


* Are you sure you want to DELETE this alias?            [NO]:

If you answer YES to this prompt, the node is removed from the cluster alias.

3.12.1 Specifying an Address

If you are adding a node to the cluster alias, the procedure displays the following prompt:


* Cluster Alias Phase IV Address (aa.nnnn OR AA-00-04-00-xx-xx) :

Specify either the DECnet Phase IV node address or the Ethernet physical address of the alias. The Phase IV node address has the format area-number.node-number (for example, 63.171). The Ethernet physical address has the format AA-00-04-00-xx-xx, where xx-xx is calculated from the Phase IV node address. To determine the Ethernet physical address, proceed as follows (a DCL sketch of the same calculation appears after these steps):

  1. Convert the Phase IV node address to its decimal equivalent as follows:


    (area-number * 1024) + node-number = decimal equivalent
    (For example, (63 * 1024) + 171 = 64683 decimal)
    
  2. Convert the decimal node address to its hexadecimal equivalent and reverse the order of the bytes to form the hexadecimal node address.


    (For example, 64683 decimal = FCAB hexadecimal,
       reversed = ABFC hexnodeaddress)
    
  3. Incorporate the hexadecimal node address in the following format:


    AA-00-04-00-hexnodeaddress
    (For example, AA-00-04-00-AB-FC)
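
The same calculation can be done interactively with DCL lexical functions. The following is only an illustrative sketch (the symbol names are arbitrary), using the 63.171 example from the steps above:

$ area = 63
$ node = 171
$ decimal = (area * 1024) + node            ! 64683
$ hex = F$FAO("!XW", decimal)               ! "FCAB" (word-sized hexadecimal)
$ physical = "AA-00-04-00-" + F$EXTRACT(2,2,hex) + "-" + F$EXTRACT(0,2,hex)
$ SHOW SYMBOL physical                      !  PHYSICAL = "AA-00-04-00-AB-FC"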
    

3.12.2 Determining Selection Weight

The selection weight determines the number of sequential incoming connects passed to this alias member node in the round-robin sequence before proceeding to the next member node in the sequence. A value of zero means this node is not eligible to receive incoming connections to this alias address. Selection weight apportions incoming alias connections according to the capacity of each alias member.

For example, nodes with greater capacity should have larger values of selection weight, while OpenVMS Cluster satellites should generally have a value of zero. Specify a nonzero selection weight if this node is connected locally to a dual-ported disk, or if it will be serving any multihost disks, such as RFxx or HSC-connected disks, to other cluster members. Values between 0 and 10 are suggested.


* Selection weight for this cluster node [0 for satellites]  :

Enter a selection weight and press Return.

3.13 Replace MOP Client Configuration

To replace a MOP Client Configuration, select Option 10 from the ADVANCED Configuration Option menu.


* Which configuration option to perform?                 [1] : 10

The following question appears:


* Load MOP on this system?                             [YES] :

By default, MOP is not started by NET$STARTUP. To make this system service MOP requests, the logical name NET$STARTUP_MOP must be defined to signal NET$STARTUP to load the MOP software. This logical name is normally defined in SYS$STARTUP:NET$LOGICALS.COM.

Answering YES to this question modifies SYS$STARTUP:NET$LOGICALS.COM for you to enable MOP service on this system. Answering NO removes the logical name definition from SYS$STARTUP:NET$LOGICALS.COM. Note that answering NO has no effect if NET$STARTUP_MOP is defined elsewhere.
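
For reference, the kind of line involved is sketched below. This is an illustration only; net$configure maintains the actual definition in SYS$STARTUP:NET$LOGICALS.COM, and the value shown is an assumption, because the logical name need only be defined.

$ ! Illustrative only -- maintained by net$configure:
$ DEFINE/SYSTEM/NOLOG NET$STARTUP_MOP TRUE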

If you answer YES, the following displays:


%NET$CONFIGURE-I-MOPCLIENTFND, MOP client NCL script already exists

* Replace MOP Client script file?                       [NO] : yes

Answer YES to create a new MOP Client NCL script file; otherwise, press Return.

The procedure displays the Summary of Configuration and asks the following:


* Do you want to generate NCL configuration scripts?   [YES] :

If you answer YES, the configuration program uses the information you entered to generate modified NCL scripts and, in some cases, automatically modify the system's configuration. The configuration program then returns to the Configuration Options menu.

If you answer NO, the configuration procedure returns to the Configuration Options menu and does not generate any modified NCL scripts.

3.14 Configuring Satellite Nodes

Note

CAUTION: If your cluster is running mixed versions of DECnet, you cannot use this feature. Instead, you must configure the nodes independently by running net$configure on each system.

To select this option, you must have already configured the system using the ADVANCED configuration option, and net$configure must be executing on a cluster system.

From the ADVANCED Configuration Option menu, select Option 11.


* Which configuration option to perform?                 [1] : 11

A submenu appears:


        Configuration Options:

        [0]     Return to main menu

        [1]     Autoconfigure Phase IV cluster nodes
        [2]     Full configuration of cluster node
        [3]     Configure local node

* Which configuration option to perform?                 [1] :

Autoconfigure Phase IV Cluster Nodes (Submenu Option 1)

If you select Option 1, the procedure scans the system disk for evidence of satellite nodes that have not yet been configured to run DECnet-Plus. If it finds one, it creates SYS$SPECIFIC:[SYS$STARTUP]NET$AUTOCONFIGURE.COM in that member's system root, which causes the cluster member to automatically configure DECnet-Plus the next time it reboots. The procedure prompts you to enter the full name of a cluster alias.


* Fullname of cluster alias:                                 :

Supply the full node name of the cluster alias. If none is supplied, no cluster alias is configured for the systems being upgraded.


* Device containing system roots            [SYS$SYSDEVICE:] :

Configuring cluster satellites involves finding the system root from which the satellite boots. Normally, this is SYS$SYSDEVICE:, although it is possible to install system roots to a different volume.

The device given in response to this question is searched for all system roots. Those found that do not contain a checksum database are assumed to be Phase IV nodes, and are candidates for being flagged for DECnet-Plus autoconfiguration.
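
If you want to see which roots exist on the device yourself, a simple DCL sketch (assuming the default device) is:

$ DIRECTORY/NOHEADING/NOTRAILING SYS$SYSDEVICE:[000000]SYS*.DIR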


* Upgrade Phase IV cluster member FIGS?                [Yes] :

A system root was found that does not contain a DECnet-Plus checksum database, and is therefore assumed to be a Phase IV system. Answering YES to this question causes that cluster node to be flagged to run a DECnet-Plus autoconfiguration on its next reboot.


* What is the synonym name for this system?           [FIGS] :

Specify the node synonym to be used for this cluster member when it is autoconfigured. The default is the member's current node name.
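
To confirm afterward that a member's root has been flagged, you can look for the autoconfiguration procedure in that root. This is an illustrative sketch only; the root directory SYS10 is an assumed example:

$ DIRECTORY SYS$SYSDEVICE:[SYS10.SYS$STARTUP]NET$AUTOCONFIGURE.COM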

Full Configuration of Cluster Node (Submenu Option 2)


* Which configuration option to perform?                 [1] : 2

If you select Option 2, the procedure prompts for a cluster member name (and system root location). Once these are supplied, all net$configure modifications are made to the DECnet configuration for that cluster member. Note that only a subset of the configuration options is available in this mode.


* Cluster node name to be configured:                        : TPZERO

This is simply the node name of the cluster member to configure. The net$configure procedure attempts to find the system root for that cluster member (by scanning NET$MOP_CLIENT_STARTUP.NCL) to supply defaults for the two questions that follow.


* Device for TPZERO root:                    [SYS$SYSDEVICE] :

In configuring a cluster member other than the system on which net$configure executes, you must specify the location of the member's system root. The location is the disk device on which the cluster member's system root resides.

The default answer is either SYS$SYSDEVICE or the root device found for that system in NET$MOP_CLIENT_STARTUP.NCL.


* Directory for TPZERO root:                                 : SYS2

In configuring a cluster member other than the system on which net$configure executes, you must also specify the system root directory. The system root directory is of the form "SYSxxxx," where "xxxx" is the hexadecimal root number from which that member loads.

Note that before net$configure returns to the main menu, it warns that all subsequent options will be applied to the cluster node just specified. Notice also that Option 5 (Configure Timezones) is not present when configuring other cluster members.


%NET$CONFIGURE-I-VERCHECKSUM, verifying checksums

All configuration options will be applied to cluster node TPZERO

Configure Local Node (Submenu Option 3)

If you select Option 3, it clears the action of Option 2; all subsequent net$configure modifications are made to the local system (as when net$configure was started).


* Which configuration option to perform?                 [1] : 3

3.15 Configuring Cluster Script Locations

This option allows the system manager to make the network startup scripts NET$APPLICATION_STARTUP.NCL, NET$MOP_CLIENT_STARTUP.NCL, and NET$EVENT_STARTUP.NCL common to all cluster nodes. That is, a single copy of each script is shared by all systems in the cluster, ensuring that all systems have the same application, MOP client, and event logging configuration.

It does this by copying the script from the SYS$SPECIFIC directory to the SYS$COMMON directory. Note that when it does so, it does not delete the script from the SYS$SPECIFIC directories of the other cluster systems; you must remove those copies by rerunning this dialog for each of the other cluster members.
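
Conceptually, the operation for one script resembles the following DCL. This is an illustrative sketch of the effect only, not the procedure's literal implementation:

$ COPY SYS$SPECIFIC:[SYSMGR]NET$APPLICATION_STARTUP.NCL -
       SYS$COMMON:[SYSMGR]NET$APPLICATION_STARTUP.NCL
$ ! Node-specific copies on other cluster members still take precedence
$ ! until they are deleted through this dialog.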

To select this option, you must have already configured the system using the ADVANCED configuration option, and net$configure must be executing on a cluster system.

From the ADVANCED Configuration Option menu, select Option 12.


* Which configuration option to perform?                 [1] : 12

For this example, the system manager selects Option 12 to create cluster common scripts for APPLICATION, EVENT, and MOP_CLIENT. These cluster common scripts are created from the latest configuration on the currently executing system.


* Move the APPLICATION startup script to the cluster common area? [YES] :
%NET$CONFIGURE-I-MOVESCRIPT, created cluster common APPLICATION startup script
from SYS$SPECIFIC:[SYSMGR]NET$APPLICATION_STARTUP.NCL;
* Move the EVENT startup script to the cluster common area? [YES] :
%NET$CONFIGURE-I-MOVESCRIPT, created cluster common EVENT startup script from
SYS$SPECIFIC:[SYSMGR]NET$EVENT_STARTUP.NCL;
* Move the MOP_CLIENT startup script to the cluster common area? [YES] :
%NET$CONFIGURE-I-MOVESCRIPT, created cluster common MOP_CLIENT startup script
from SYS$SPECIFIC:[SYSMGR]NET$MOP_CLIENT_STARTUP.NCL;
%NET$CONFIGURE-I-MODCHECKSUM, checksumming NCL management scripts modified by
NET$CONFIGURE
%NET$CONFIGURE-I-CONFIGCOMPLETED, DECnet-Plus for OpenVMS configuration completed
%NET$CONFIGURE-I-USECOMMON, using cluster common APPLICATION script
%NET$CONFIGURE-I-USECOMMON, using cluster common EVENT script
%NET$CONFIGURE-I-USECOMMON, using cluster common MOP_CLIENT script

However, these cluster common scripts will not be used by the system "TPZERO", because it still has local copies. So the system manager selects Option 11 (Configure satellite nodes), and then submenu Option 2, to manage the configuration for node TPZERO.


        Configuration Options:

        [0]     Exit this procedure

        [1]     Perform an entire configuration
        [2]     Change naming information
        [3]     Configure Devices on this machine
        [4]     Configure Transports
        [5]     Configure Timezone Differential Factor
        [6]     Configure Event Dispatcher
        [7]     Configure Application database
        [8]     Configure MOP Client database
        [9]     Configure Cluster Alias
        [10]    Replace MOP Client configuration
        [11]    Configure satellite nodes
        [12]    Configure cluster script locations

* Which configuration option to perform?                 [0] : 11

        Configuration Options:

        [0]     Return to main menu

        [1]     Autoconfigure Phase IV cluster nodes
        [2]     Full configuration of cluster node
        [3]     Configure local node

* Which configuration option to perform?                 [0] : 2
* Cluster node name to be configured:                        : TPZERO
* Device for TPZERO root:                    [SYS$SYSDEVICE] :
* Directory for TPZERO root:                                 : SYS2
%NET$CONFIGURE-I-OVERRIDECOMMON, node specific APPLICATION script overrides
the cluster common settings
%NET$CONFIGURE-I-OVERRIDECOMMON, node specific EVENT script overrides the
cluster common settings
%NET$CONFIGURE-I-OVERRIDECOMMON, node specific MOP_CLIENT script overrides
the cluster common settings

All configuration options will be applied to cluster node TPZERO

The procedure reports that TPZERO has local versions of these scripts, which override the cluster common defaults. Selecting Option 12 (Configure cluster script locations) allows the manager to delete these local overrides so that TPZERO will use the cluster common versions.



        Configuration Options:

        [0]     Exit this procedure

        [1]     Perform an entire configuration
        [2]     Change naming information
        [3]     Configure Devices on this machine
        [4]     Configure Transports
        [6]     Configure Event Dispatcher
        [7]     Configure Application database
        [8]     Configure MOP Client database
        [9]     Configure Cluster Alias
        [10]    Replace MOP Client configuration
        [11]    Configure satellite nodes
        [12]    Configure cluster script locations

* Which configuration option to perform?                 [0] : 12
* Delete the local APPLICATION startup script?          [No] : yes
%NET$CONFIGURE-I-DELETEDOVERRIDE, deleted system specific copy of the
APPLICATION startup script
* Delete the local EVENT startup script?                [No] : yes
%NET$CONFIGURE-I-DELETEDOVERRIDE, deleted system specific copy of the EVENT
startup script
* Delete the local MOP_CLIENT startup script?           [No] : yes
%NET$CONFIGURE-I-DELETEDOVERRIDE, deleted system specific copy of the
MOP_CLIENT startup script
%NET$CONFIGURE-I-MODCHECKSUM, checksumming NCL management scripts modified by
NET$CONFIGURE
%NET$CONFIGURE-I-CONFIGCOMPLETED, DECnet-Plus for OpenVMS configuration completed
%NET$CONFIGURE-I-USECOMMON, using cluster common APPLICATION script
%NET$CONFIGURE-I-USECOMMON, using cluster common EVENT script
%NET$CONFIGURE-I-USECOMMON, using cluster common MOP_CLIENT script

All configuration options will be applied to cluster node TPZERO

