OpenVMS Cluster Systems
 
 
4.3 Installing Software Licenses
While rebooting at the end of the installation procedure, the system
displays messages warning that you must install the operating system
software license and the OpenVMS Cluster software license. The OpenVMS
Cluster software supports the OpenVMS License Management Facility (LMF).
License units for clustered systems are allocated on an unlimited
system-use basis.
4.3.1 Guidelines
 
Be sure to install all OpenVMS Cluster licenses and all licenses for
layered products and DECnet as soon as the system is available.
Procedures for installing licenses are described in the release notes
distributed with the software kit and in the OpenVMS License Management Utility Manual. Additional
licensing information is described in the respective Software Product
Descriptions (SPDs).
 
Use the following guidelines when you install software licenses:
 
  - Install an OpenVMS Cluster Software for Alpha license for each
  Alpha processor in the OpenVMS Cluster.

  - Install an OpenVMS Cluster Software for VAX license for each VAX
  processor in an OpenVMS Cluster system.

  - Install or upgrade licenses for layered products that will run on
  all nodes in an OpenVMS Cluster system.

  - OpenVMS Product Authorization Keys (PAKs) that have the Alpha
  option can be loaded and used only on Alpha processors. However, PAKs
  can be located in a license database (LDB) that is shared by both
  Alpha and VAX processors.

  - Do not load Availability PAKs for VAX systems (Availability PAKs
  that do not include the Alpha option) on Alpha systems.

  - PAK types such as Activity PAKs (also known as concurrent or n-user
  PAKs) and Personal Use PAKs (identified by the RESERVE_UNITS option)
  work on both VAX and Alpha systems.

  - Compaq recommends that you perform licensing tasks using an Alpha
  LMF.
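You register and load licenses with the DCL LICENSE command (or
interactively with the SYS$UPDATE:VMSLICENSE.COM procedure). The
following is a minimal sketch of the pattern for a cluster PAK; the
product name, authorization number, and checksum shown are placeholders
from a hypothetical PAK, not real values:

$ ! All values below are placeholders; copy the real ones from your PAK.
$ LICENSE REGISTER VMSCLUSTER /ISSUER=DEC -
        /AUTHORIZATION=USA-123456 -
        /UNITS=0 -
        /CHECKSUM=1-AAAA-BBBB-CCCC-DDDD
$ LICENSE LOAD VMSCLUSTER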
  
4.4 Installing Layered Products
If you install layered products before other nodes are added to the
OpenVMS Cluster, the software is installed automatically on new members
when they are added to the OpenVMS Cluster system.
 
Note: For clusters with multiple system disks (VAX,
Alpha, or both) you must perform a separate installation for each
system disk.
4.4.1 Procedure
 
Table 4-2 describes the actions you take to install layered
products on a common system disk.  
 
  Table 4-2 Installing Layered Products on a Common System Disk

  Phase: Before installation

  Action: Perform one or more of the following steps, as necessary for
  your system.

  - Check each node's system parameters and modify the values, if
  necessary. Refer to the layered-product installation guide or release
  notes for information about adjusting system parameter values.

  - If necessary, disable logins on each node that boots from the disk
  using the DCL command SET LOGINS/INTERACTIVE=0. Send a broadcast
  message to notify users about the installation (see the sketch
  following this table).

  Phase: Installation

  Action: Refer to the appropriate layered-product documentation for
  product-specific installation information. Perform the installation
  once for each system disk.

  Phase: After installation

  Action: Perform one or more of the following steps, as necessary for
  your system.

  - If necessary, create product-specific files in the SYS$SPECIFIC
  directory on each node. (The installation utility describes whether or
  not you need to create a directory in SYS$SPECIFIC.) When creating
  files and directories, be careful to specify exactly where you want the
  file to be located:

    - Use SYS$SPECIFIC or SYS$COMMON instead of SYS$SYSROOT.
    - Use SYS$SPECIFIC:[SYSEXE] or SYS$COMMON:[SYSEXE] instead of
    SYS$SYSTEM.

    Reference: Section 5.3 describes directory structures in more detail.

  - Modify files in SYS$SPECIFIC if the installation procedure tells
  you to do so. Modify the files on each node that boots from this system
  disk.

  - Reboot each node to ensure that:

    - The node is set up to run the layered product correctly.
    - The node is running the latest version of the layered product.

  - Manually run the installation verification procedure (IVP) if you
  did not run it during the layered product installation. Run the IVP
  from at least one node in the OpenVMS Cluster, but preferably from all
  nodes that boot from this system disk.
 
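As a minimal sketch of the "before installation" steps in Table 4-2,
the following DCL sequence disables interactive logins and broadcasts a
warning. The message text is illustrative only, and the REPLY command
requires the OPER privilege:

$ ! Block new interactive logins on this node, then warn users already logged in.
$ SET LOGINS/INTERACTIVE=0
$ REPLY/ALL/BELL "Layered product installation starting; please log out now."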
4.5 Configuring and Starting a Satellite Booting Service 
After you have installed the operating system and the required licenses
on the first OpenVMS Cluster computer, you can configure and start a
satellite booting service. You can use the LANCP utility, DECnet
software, or both.
 
Compaq recommends LANCP for booting OpenVMS Cluster satellites. LANCP
has shipped with the OpenVMS operating system since Version 6.2. It
provides a general-purpose MOP booting service that can be used for
booting satellites into an OpenVMS Cluster. (LANCP can service all
types of MOP downline load requests, including those from terminal
servers, LAN resident printers, and X terminals, and can be used to
customize your LAN environment.)
 
DECnet provides a MOP booting service for booting OpenVMS Cluster
satellites, as well as other local and wide area network services,
including task-to-task communications for applications.
 
 
  Note
If you plan to use LANCP in place of DECnet, and you also plan to move
from DECnet Phase IV to DECnet--Plus, Compaq recommends the following
order:

  - Replace DECnet with LANCP for satellite booting (MOP downline load
  service), using LAN$POPULATE.COM.

  - Migrate from DECnet Phase IV to DECnet--Plus.
 
There are two cluster configuration command procedures,
CLUSTER_CONFIG_LAN.COM and CLUSTER_CONFIG.COM. CLUSTER_CONFIG_LAN.COM
uses LANCP to provide MOP services to boot satellites;
CLUSTER_CONFIG.COM uses DECnet for the same purpose.
 
Before choosing LANCP, DECnet, or both, consider the following factors:
 
  - Applications you will be running on your cluster: DECnet
  task-to-task communication is a method commonly used for communication
  between programs that run on different nodes in a cluster or a network.
  If you are running a program with that dependency, you need to run
  DECnet. If you are not running any programs with that dependency, you
  do not need to run DECnet.

  - Limiting applications that require DECnet to certain nodes in your
  cluster: If you are running applications that require DECnet
  task-to-task communications, you can run those applications on a subset
  of the nodes in your cluster and restrict DECnet usage to those nodes.
  You can use LANCP software on the remaining nodes and use a different
  network, such as DIGITAL TCP/IP Services for OpenVMS, for other network
  services.

  - Managing two types of software for the same purpose: If you are
  already using DECnet for booting satellites, you may not want to
  introduce another type of software for that purpose. Introducing any
  new software requires time to learn and manage it.

  - LANCP MOP services can coexist with DECnet MOP services in an
  OpenVMS Cluster in the following ways:

    - Running on different systems: For example, DECnet MOP service is
    enabled on some of the systems on the LAN and LAN MOP is enabled on
    other systems.

    - Running on different LAN devices on the same system: For example,
    DECnet MOP service is enabled on a subset of the available LAN
    devices on the system and LAN MOP is enabled on the remainder.

    - Running on the same LAN device on the same system but targeting a
    different set of nodes for service: For example, both DECnet MOP and
    LAN MOP are enabled, but LAN MOP has limited the nodes to which it will
    respond. This allows DECnet MOP to respond to the remaining nodes.
Instructions for configuring both LANCP and DECnet are provided in this
section.
4.5.1 Configuring and Starting the LANCP Utility 
 
You can use the LAN Control Program (LANCP) utility to configure a
local area network (LAN). You can also use the LANCP utility, in place
of DECnet or in addition to DECnet, to provide support for booting
satellites in an OpenVMS Cluster and for servicing all types of MOP
downline load requests, including those from terminal servers, LAN
resident printers, and X terminals.
 
Reference: For more information about using the LANCP
utility to configure a LAN, see the OpenVMS System Manager's Manual, Volume 2: Tuning, Monitoring, and Complex Systems and the OpenVMS System Management Utilities Reference Manual: A--L.
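For example, you can run LANCP interactively to see which LAN devices
are available before enabling any of them for MOP service. SHOW
CONFIGURATION is a standard LANCP command; the output varies by system:

$ MCR LANCP
LANCP> SHOW CONFIGURATION
LANCP> EXIT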
4.5.2 Booting Satellite Nodes with LANCP
 
The LANCP utility provides a general-purpose MOP booting service that
can be used for booting satellites into an OpenVMS Cluster. It can also
be used to service all types of MOP downline load requests, including
those from terminal servers, LAN resident printers, and X terminals. To
use LANCP for this purpose, all OpenVMS Cluster nodes must be running
OpenVMS Version 6.2 or higher.
 
The CLUSTER_CONFIG_LAN.COM cluster configuration command procedure uses
LANCP in place of DECnet to provide MOP services to boot satellites
(see the sketch at the end of this section).
 
Note: If you plan to use LANCP in place of DECnet, and
you also plan to move from DECnet for OpenVMS (Phase IV) to
DECnet--Plus, Compaq recommends the following order:
 
  - Replace DECnet with LANCP for satellite booting (MOP downline load
  service), using LAN$POPULATE.COM.

  - Migrate from DECnet for OpenVMS to DECnet--Plus.
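As a brief illustration, invoking the LANCP-based configuration
procedure looks like the following; this sketch assumes the procedure
resides in SYS$MANAGER, its usual location, and the procedure prompts
for node-specific details:

$ @SYS$MANAGER:CLUSTER_CONFIG_LAN.COM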
4.5.3 Data Files Used by LANCP 
LANCP uses the following data files:
 
  - SYS$SYSTEM:LAN$DEVICE_DATABASE.DAT: This file maintains
  information about devices on the local node. By default, the file is
  created in SYS$SPECIFIC:[SYSEXE], and the system looks for the file in
  that location. However, you can modify the file name or location for
  this file by redefining the systemwide logical name LAN$DEVICE_DATABASE.

  - SYS$SYSTEM:LAN$NODE_DATABASE.DAT: This file contains information
  about the nodes for which LANCP will supply boot service. This file
  should be shared among all nodes in the OpenVMS Cluster, including both
  Alpha and VAX systems. By default, the file is created in
  SYS$COMMON:[SYSEXE], and the system looks for the file in that
  location. However, you can modify the file name or location for this
  file by redefining the systemwide logical name LAN$NODE_DATABASE (see
  the example after this list).
  
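For instance, to relocate both databases off the system disk, you might
add definitions such as the following to your startup files; here
ddcu:[directory] is a placeholder for the location you choose:

$ ! ddcu:[directory] is a placeholder for your chosen file location.
$ DEFINE/SYSTEM/EXE LAN$DEVICE_DATABASE ddcu:[directory]LAN$DEVICE_DATABASE.DAT
$ DEFINE/SYSTEM/EXE LAN$NODE_DATABASE ddcu:[directory]LAN$NODE_DATABASE.DAT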
4.5.4 Using LAN MOP Services in New Installations
To use LAN MOP services for satellite booting in new installations,
follow these steps:
 
  - Add the startup command for LANCP. You should start up LANCP as
  part of your system startup procedure. To do this, remove the comment
  from the line in SYS$MANAGER:SYSTARTUP_VMS.COM that runs the
  LAN$STARTUP command procedure. If your OpenVMS Cluster system will have
  more than one system disk, see Section 4.5.3 for a description of
  logicals that can be defined for locating LANCP configuration files.

$ @SYS$STARTUP:LAN$STARTUP

    You should now either reboot the system or invoke the preceding
    command procedure from the system manager's account to start LANCP.

  - Follow the steps in Chapter 8 for configuring an OpenVMS
  Cluster system and adding satellites. Use the CLUSTER_CONFIG_LAN.COM
  command procedure instead of CLUSTER_CONFIG.COM. If you invoke
  CLUSTER_CONFIG.COM, it gives you the option to switch to running
  CLUSTER_CONFIG_LAN.COM if the LANCP process has been started. (A
  verification sketch follows this list.)
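Once LANCP is running, a minimal check is to list what its permanent
databases contain; this sketch assumes the default database locations
described in Section 4.5.3:

$ MCR LANCP
LANCP> LIST DEVICE /MOPDLL
LANCP> LIST NODE
LANCP> EXIT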
4.5.5 Using LAN MOP Services in Existing Installations
To migrate from DECnet MOP services to LAN MOP services for satellite
booting, follow these steps:
 
  - Redefine the LANCP database logical names. This step is
  optional. If you want to move the data files used by LANCP,
  LAN$DEVICE_DATABASE and LAN$NODE_DATABASE, off the system disk,
  redefine their systemwide logical names. Add the definitions to the
  system startup files.

  - Use LANCP to create the LAN$DEVICE_DATABASE. The permanent
  LAN$DEVICE_DATABASE is created when you issue the first LANCP DEVICE
  command. To create the database and get a list of available devices,
  enter the following commands:
$ MCR LANCP
LANCP> LIST DEVICE /MOPDLL
%LANCP-I-FNFDEV, File not found, LAN$DEVICE_DATABASE
%LANACP-I-CREATDEV, Created LAN$DEVICE_DATABASE file

Device Listing, permanent database:
  --- MOP Downline Load Service Characteristics ---
Device    State   Access Mode      Client            Data Size
------    -----   -----------      ------            ---------
ESA0    Disabled NoExclusive  NoKnownClientsOnly     246 bytes
FCA0    Disabled NoExclusive  NoKnownClientsOnly     246 bytes
  - Use LANCP to enable LAN devices for MOP booting. By default,
  the LAN devices have MOP booting capability disabled. Determine the LAN
  devices for which you want to enable MOP booting. Then use the DEFINE
  command in the LANCP utility to enable these devices to service MOP
  boot requests in the permanent database, as shown in the following
  example:

LANCP> DEFINE DEVICE ESA0:/MOP=ENABLE
  - Run LAN$POPULATE.COM (found in SYS$EXAMPLES) to obtain MOP booting
  information and to produce LAN$DEFINE and LAN$DECNET_MOP_CLEANUP, which
  are site specific. LAN$POPULATE extracts all MOP booting
  information from a DECnet Phase IV NETNODE_REMOTE.DAT file or from the
  output of the DECnet--Plus NCL command SHOW MOP CLIENT * ALL. For
  DECnet Phase IV sites, the LAN$POPULATE procedure scans all DECnet
  areas (1--63) by default. If you MOP boot systems from only one or a
  few DECnet areas, you can have the LAN$POPULATE procedure operate
  on a single area at a time by providing the area number as the P1
  parameter to the procedure, as shown in the following example
  (including log):

$ @SYS$EXAMPLES:LAN$POPULATE 15
LAN$POPULATE - V1.0
Do you want help (Y/N) <N>:
LAN$DEFINE.COM has been successfully created.
To apply the node definitions to the LANCP permanent database,
invoke the created LAN$DEFINE.COM command procedure.
Compaq recommends that you review LAN$DEFINE.COM and remove any
obsolete entries prior to executing this command procedure.
A total of 2 MOP definitions were entered into LAN$DEFINE.COM
  - Run LAN$DEFINE.COM to populate LAN$NODE_DATABASE. LAN$DEFINE
  populates the LAN node database file,
  SYS$COMMON:[SYSEXE]LAN$NODE_DATABASE.DAT, with the LANCP downline
  loading information. Compaq recommends that you review LAN$DEFINE.COM
  and remove any obsolete entries before executing it. In the following
  sequence, the LAN$DEFINE.COM procedure that was just created is
  displayed on the screen and then executed:

$ TYPE LAN$DEFINE.COM
$ !
$ ! This file was generated by LAN$POPULATE.COM on 16-DEC-1996 09:20:31
$ ! on node CLU21.
$ !
$ ! Only DECnet Area 15 was scanned.
$ !
$ MCR LANCP
Define Node PORK    /Address=08-00-2B-39-82-85 /File=APB.EXE -
                 /Root=$21$DKA300:<SYS11.> /Boot_type=Alpha_Satellite
Define Node JYPIG   /Address=08-00-2B-A2-1F-81 /File=APB.EXE -
                 /Root=$21$DKA300:<SYS10.> /Boot_type=Alpha_Satellite
EXIT
$ @LAN$DEFINE
%LANCP-I-FNFNOD, File not found, LAN$NODE_DATABASE
-LANCP-I-CREATNOD, Created LAN$NODE_DATABASE file
$

    The following example shows a LAN$DEFINE.COM command procedure that
    was generated by LAN$POPULATE for migration from DECnet--Plus to
    LANCP:

$ ! LAN$DEFINE.COM - LAN MOP Client Setup
$ !
$ ! This file was generated by LAN$POPULATE.COM at  8-DEC-1996 14:28:43.31
$ ! on node BIGBOX.
$ !
$ SET NOON
$ WRITE SYS$OUTPUT "Setting up MOP DLL clients in LANCP..."
$ MCR LANCP
SET    NODE SLIDER /ADDRESS=08-00-2B-12-D8-72 /ROOT=BIGBOX$DKB0:<SYS10.> -
       /BOOT_TYPE=VAX_satellite /FILE=NISCS_LOAD.EXE
DEFINE NODE SLIDER /ADDRESS=08-00-2B-12-D8-72 /ROOT=BIGBOX$DKB0:<SYS10.> -
       /BOOT_TYPE=VAX_satellite /FILE=NISCS_LOAD.EXE
EXIT
$ !
$ WRITE SYS$OUTPUT "DECnet Phase V to LAN MOPDLL client migration complete!"
$ EXIT
  - Run LAN$DECNET_MOP_CLEANUP.COM. You can use
  LAN$DECNET_MOP_CLEANUP.COM to remove the clients' MOP downline loading
  information from the DECnet database. Compaq recommends that you review
  LAN$DECNET_MOP_CLEANUP.COM and remove any obsolete entries before
  executing it. The following example shows a
  LAN$DECNET_MOP_CLEANUP.COM command procedure that was generated by
  LAN$POPULATE for migration from DECnet--Plus to LANCP.

    Note: When migrating from DECnet--Plus, additional
    cleanup is necessary. You must edit your NCL scripts (*.NCL) manually.

$ ! LAN$DECNET_MOP_CLEANUP.COM - DECnet MOP Client Cleanup
$ !
$ ! This file was generated by LAN$POPULATE.COM at  8-DEC-1995 14:28:43.47
$ ! on node BIGBOX.
$ !
$ SET NOON
$ WRITE SYS$OUTPUT "Removing MOP DLL clients from DECnet database..."
$ MCR NCL
DELETE NODE 0 MOP CLIENT SLIDER
EXIT
$ !
$ WRITE SYS$OUTPUT "DECnet Phase V MOPDLL client cleanup complete!"
$ EXIT
  - Start LANCP. To start LANCP, execute the startup command
  procedure as follows:

$ @SYS$STARTUP:LAN$STARTUP
%RUN-S-PROC_ID, identification of created process is 2920009B
$

    You should start up LANCP for all boot nodes as part of your system
    startup procedure. To do this, include the following line in your
    site-specific startup file (SYS$MANAGER:SYSTARTUP_VMS.COM):

$ @SYS$STARTUP:LAN$STARTUP

    If you have defined logicals for either LAN$DEVICE_DATABASE or
    LAN$NODE_DATABASE, be sure that these are defined in your startup files
    prior to starting up LANCP.
  - Disable DECnet MOP booting. If you use LANCP for satellite
  booting, you may no longer need DECnet to handle MOP requests. If this
  is the case for your site, you can turn off this capability with the
  appropriate NCP commands (DECnet for OpenVMS) or NCL commands
  (DECnet--Plus), as sketched below.
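For DECnet for OpenVMS, the sequence looks like the following sketch;
QNA-0 is a placeholder circuit name, and a circuit must be turned off
before its service setting can be changed (the DEFINE command makes the
change permanent across restarts):

$ RUN SYS$SYSTEM:NCP
NCP> SET CIRCUIT QNA-0 STATE OFF
NCP> SET CIRCUIT QNA-0 SERVICE DISABLED
NCP> SET CIRCUIT QNA-0 STATE ON
NCP> DEFINE CIRCUIT QNA-0 SERVICE DISABLED
NCP> EXIT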
For more information about the LANCP utility, see the OpenVMS System Manager's Manual and
the OpenVMS System Management Utilities Reference Manual.
4.5.6 Configuring DECnet
 
The process of configuring the DECnet network typically entails several
operations, as shown in Table 4-3. An OpenVMS Cluster running both
implementations of DECnet requires a system disk for DECnet for OpenVMS
(Phase IV) and another system disk for DECnet--Plus (Phase V).
 
Note: DECnet for OpenVMS implements Phase IV of
Digital Network Architecture (DNA). DECnet--Plus implements Phase V of
DNA. The following discussions are specific to the DECnet for OpenVMS
product.
 
Reference: Refer to the DECnet--Plus documentation for
equivalent DECnet--Plus configuration information.  
 
  Table 4-3 Procedure for Configuring the DECnet Network

  Step 1: Log in as system manager and execute the NETCONFIG.COM command
  procedure as shown. Enter information about your node when prompted.
  Note that DECnet--Plus nodes execute the NET$CONFIGURE.COM command
  procedure.

    Reference: See the DECnet for OpenVMS or the DECnet--Plus
    documentation, as appropriate, for examples of these procedures.

  Step 2: When a node uses multiple LAN adapter connections to the same
  LAN and also uses DECnet for communications, you must disable DECnet
  use of all but one of the LAN devices. To do this, remove all but one
  of the lines and circuits associated with the adapters connected to the
  same LAN or extended LAN from the DECnet configuration database after
  the NETCONFIG.COM procedure is run.

    For example, issue the following commands to invoke NCP and disable
    DECnet use of the LAN device XQB0:

$ RUN SYS$SYSTEM:NCP
NCP> PURGE CIRCUIT QNA-1 ALL
NCP> DEFINE CIRCUIT QNA-1 STA OFF
NCP> EXIT

    References: See Guidelines for OpenVMS Cluster Configurations for
    more information about distributing connections to LAN segments in
    OpenVMS Cluster configurations. See the DECnet--Plus documentation
    for information about removing routing circuits associated with all
    but one LAN adapter. (Note that the LAN adapter issue is not a
    problem if the DECnet--Plus node uses extended addressing and does
    not have any Phase IV compatible addressing in use on any of the
    routing circuits.)

  Step 3: Make remote node data available clusterwide. NETCONFIG.COM
  creates the permanent remote-node database file NETNODE_REMOTE.DAT, in
  which remote-node data is maintained, in the SYS$SPECIFIC:[SYSEXE]
  directory. To make this data available throughout the OpenVMS Cluster,
  move the file to the SYS$COMMON:[SYSEXE] directory.

    Example: Enter the following command to make DECnet information
    available clusterwide:

$ RENAME SYS$SPECIFIC:[SYSEXE]NETNODE_REMOTE.DAT SYS$COMMON:[SYSEXE]NETNODE_REMOTE.DAT

    If your configuration includes multiple system disks, you can set
    up a common NETNODE_REMOTE.DAT file automatically by using the
    following command in SYLOGICALS.COM, where ddcu:[directory] is the
    location of the shared file:

$ DEFINE/SYSTEM/EXE NETNODE_REMOTE ddcu:[directory]NETNODE_REMOTE.DAT

    Notes: Compaq recommends that you set up a common NETOBJECT.DAT
    file clusterwide in the same manner. DECdns is used by DECnet--Plus
    nodes to manage node data (the namespace). For DECnet--Plus, Session
    Control Applications replace objects.

  Step 4: Designate and enable router nodes to support the use of a
  cluster alias. At least one node participating in a cluster alias must
  be configured as a level 1 router.

    On VAX systems, you can designate a computer as a router node when
    you execute NETCONFIG.COM (as shown in step 1). On Alpha systems, you
    might need to enable level 1 routing manually because the
    NETCONFIG.COM procedure does not prompt you with the routing question.

    Depending on whether the configuration includes all Alpha nodes or
    a combination of VAX and Alpha nodes, follow these instructions:

    IF the cluster consists of...   THEN...
    Alpha nodes only                You must enable level 1 routing
                                    manually (see the example below) on
                                    one of the Alpha nodes.
    Both Alpha and VAX nodes        You do not need to enable level 1
                                    routing on an Alpha node if one of
                                    the VAX nodes is already a routing
                                    node. Likewise, you do not need to
                                    enable the DECnet extended function
                                    license (DVNETEXT) on an Alpha node
                                    in that case.

    Example: On Alpha systems, if you need to enable level 1 routing on
    an Alpha node, invoke the NCP utility to do so:

$ RUN SYS$SYSTEM:NCP
NCP> DEFINE EXECUTOR TYPE ROUTING IV

    Note: On Alpha systems, level 1 routing is supported only to enable
    cluster alias operations.

  Step 5: Optionally, define a cluster alias. If you want to define a
  cluster alias, invoke the NCP utility to do so. The information you
  specify using these commands is entered in the DECnet permanent
  executor database and takes effect when you start the network.

    Example: The following NCP commands establish SOLAR as an alias (a
    verification sketch follows this table):

$ RUN SYS$SYSTEM:NCP
NCP> DEFINE NODE 2.1 NAME SOLAR
NCP> DEFINE EXECUTOR ALIAS NODE SOLAR
NCP> EXIT
$

    Reference: Section 4.5.8 describes the cluster alias.
    Section 4.5.9 describes how to enable alias operations for other
    computers. See the DECnet--Plus documentation for information about
    setting up a cluster alias on DECnet--Plus nodes.

    Note: DECnet for OpenVMS nodes and DECnet--Plus nodes cannot share
    a cluster alias.
 
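After the network is started, a quick way to confirm the alias is to
display the executor characteristics, which include the alias node;
SHOW EXECUTOR CHARACTERISTICS is a standard NCP command, and SOLAR is
the alias from the example in step 5:

$ RUN SYS$SYSTEM:NCP
NCP> SHOW EXECUTOR CHARACTERISTICS
NCP> EXIT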
 
 
  
  