While rebooting at the end of the installation procedure, the system
displays messages warning that you must install the operating system
software and the OpenVMS Cluster software license. The OpenVMS Cluster
software supports the OpenVMS License Management Facility (LMF).
License units for clustered systems are allocated on an unlimited
system-use basis.
4.3.1 Guidelines
Be sure to install all OpenVMS Cluster licenses and all licenses for layered products and DECnet as soon as the system is available. Procedures for installing licenses are described in the release notes distributed with the software kit and in the HP OpenVMS License Management Utility Manual. Additional licensing information is described in the respective SPDs.
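As a sketch of a typical licensing sequence (the VMSLICENSE.COM procedure and the LICENSE LOAD command are standard OpenVMS licensing tools; the product name VMSCLUSTER is illustrative, so consult your PAK and the license manual for the exact names and values):

```
$ ! Register a Product Authorization Key (PAK) interactively
$ @SYS$UPDATE:VMSLICENSE
$ ! Or load an already-registered license without rebooting
$ LICENSE LOAD VMSCLUSTER
```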
Use the following guidelines when you install software licenses:
If you install layered products before other nodes are added to the OpenVMS Cluster, the software is installed automatically on new members when they are added to the OpenVMS Cluster system.
Note: For clusters with multiple system disks
(Integrity servers or Alpha), you must perform a separate installation
for each system disk.
4.4.1 Procedure
Table 4-2 describes the actions you take to install layered products on a common system disk.
Phase | Action
---|---
Before installation | Perform one or more of the following steps, as necessary for your system.
Installation | Refer to the appropriate layered-product documentation for product-specific installation information. Perform the installation once for each system disk.
After installation | Perform one or more of the following steps, as necessary for your system.
After you have installed the operating system and the required licenses on the first OpenVMS Cluster computer, you can configure and start a satellite booting service. You can use the LANCP utility, or DECnet software, or both.
HP recommends LANCP for booting OpenVMS Cluster satellites. LANCP has shipped with the OpenVMS operating system since Version 6.2. It provides a general-purpose MOP booting service that can be used for booting satellites into an OpenVMS Cluster. (LANCP can service all types of MOP downline load requests, including those from terminal servers, LAN resident printers, and X terminals, and can be used to customize your LAN environment.)
DECnet provides a MOP booting service for booting OpenVMS Cluster satellites, as well as other local and wide area network services, including task-to-task communications for applications.
If you plan to use LANCP in place of DECnet, and you also plan to move from DECnet Phase IV to DECnet--Plus, HP recommends the following order:
There are two cluster configuration command procedures, CLUSTER_CONFIG_LAN.COM and CLUSTER_CONFIG.COM. CLUSTER_CONFIG_LAN.COM uses LANCP to provide MOP services to boot satellites; CLUSTER_CONFIG.COM uses DECnet for the same purpose.
Before choosing LANCP, DECnet, or both, consider the following factors:
Instructions for configuring both LANCP and DECnet are provided in this
section.
4.5.1 Configuring and Starting the LANCP Utility
You can use the LAN Control Program (LANCP) utility to configure a local area network (LAN). You can also use the LANCP utility, in place of DECnet or in addition to DECnet, to provide support for booting satellites in an OpenVMS Cluster and for servicing all types of MOP downline load requests, including those from terminal servers, LAN resident printers, and X terminals.
Reference: For more information about using the LANCP
utility to configure a LAN, see the HP OpenVMS System Manager's Manual, Volume 2: Tuning, Monitoring, and Complex Systems and the HP OpenVMS System Management Utilities Reference Manual: A--L.
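As an illustrative sketch, you can run LANCP interactively to inspect the current LAN configuration before making changes (the exact display varies by system and LAN hardware):

```
$ RUN SYS$SYSTEM:LANCP
LANCP> SHOW CONFIGURATION           ! list the LAN adapters known to LANCP
LANCP> SHOW DEVICE/CHARACTERISTICS  ! display device settings
LANCP> EXIT
```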
4.5.2 Booting Satellite Nodes with LANCP
The LANCP utility provides a general-purpose MOP booting service that can be used for booting satellites into an OpenVMS Cluster. It can also be used to service all types of MOP downline load requests, including those from terminal servers, LAN resident printers, and X terminals. To use LANCP for this purpose, all OpenVMS Cluster nodes must be running OpenVMS Version 6.2 or higher.
The CLUSTER_CONFIG_LAN.COM cluster configuration command procedure uses LANCP in place of DECnet to provide MOP services to boot satellites.
Note: If you plan to use LANCP in place of DECnet, and you also plan to move from DECnet for OpenVMS (Phase IV) to DECnet--Plus, HP recommends the following order:
LANCP uses the following data files: the LAN device database (LAN$DEVICE_DATABASE) and the LAN node database (LAN$NODE_DATABASE).
To use LAN MOP services for satellite booting in new installations, follow these steps:
$ @SYS$STARTUP:LAN$STARTUP
To migrate from DECnet MOP services to LAN MOP services for satellite booting, follow these steps:
$ MCR LANCP
LANCP> LIST DEVICE /MOPDLL
%LANCP-I-FNFDEV, File not found, LAN$DEVICE_DATABASE
%LANACP-I-CREATDEV, Created LAN$DEVICE_DATABASE file
Device Listing, permanent database:
  --- MOP Downline Load Service Characteristics ---
  Device    State     Access Mode    Client               Data Size
  ------    -----     -----------    ------               ---------
  ESA0    Disabled    NoExlusive     NoKnownClientsOnly   246 bytes
  FCA0    Disabled    NoExlusive     NoKnownClientsOnly   246 bytes
LANCP> DEFINE DEVICE ESA0:/MOP=ENABLE
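After enabling MOP on the device, you can confirm the change in the permanent device database (a sketch; ESA0 is the device from the preceding example):

```
LANCP> LIST DEVICE ESA0:/MOPDLL   ! the State column should now show Enabled
LANCP> EXIT
```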
$ @SYS$EXAMPLES:LAN$POPULATE 15

          LAN$POPULATE - V1.0

Do you want help (Y/N) <N>:

LAN$DEFINE.COM has been successfully created.

To apply the node definitions to the LANCP permanent database,
invoke the created LAN$DEFINE.COM command procedure.

HP recommends that you review LAN$DEFINE.COM and remove any obsolete
entries prior to executing this command procedure.

A total of 2 MOP definitions were entered into LAN$DEFINE.COM
$ TYPE LAN$DEFINE.COM
$ !
$ ! This file was generated by LAN$POPULATE.COM on 16-DEC-1996 09:20:31
$ ! on node CLU21.
$ !
$ ! Only DECnet Area 15 was scanned.
$ !
$ MCR LANCP
Define Node PORK  /Address=08-00-2B-39-82-85 /File=APB.EXE -
  /Root=$21$DKA300:<SYS11.> /Boot_type=Alpha_Satellite
Define Node JYPIG /Address=08-00-2B-A2-1F-81 /File=APB.EXE -
  /Root=$21$DKA300:<SYS10.> /Boot_type=Alpha_Satellite
EXIT
$ @LAN$DEFINE
%LANCP-I-FNFNOD, File not found, LAN$NODE_DATABASE
-LANCP-I-CREATNOD, Created LAN$NODE_DATABASE file
$
$ ! LAN$DEFINE.COM - LAN MOP Client Setup
$ !
$ ! This file was generated by LAN$POPULATE.COM at 8-DEC-1996 14:28:43.31
$ ! on node BIGBOX.
$ !
$ SET NOON
$ WRITE SYS$OUTPUT "Setting up MOP DLL clients in LANCP..."
$ MCR LANCP
SET NODE SLIDER /ADDRESS=08-00-2B-12-D8-72/ROOT=BIGBOX$DKB0:<SYS10.>/BOOT_TYPE=VAX_satellite/FILE=NISCS_LOAD.EXE
DEFINE NODE SLIDER /ADDRESS=08-00-2B-12-D8-72/ROOT=BIGBOX$DKB0:<SYS10.>/BOOT_TYPE=VAX_satellite/FILE=NISCS_LOAD.EXE
EXIT
$ !
$ WRITE SYS$OUTPUT "DECnet Phase V to LAN MOPDLL client migration complete!"
$ EXIT
$ ! LAN$DECNET_MOP_CLEANUP.COM - DECnet MOP Client Cleanup
$ !
$ ! This file was generated by LAN$POPULATE.COM at 8-DEC-1995 14:28:43.47
$ ! on node BIGBOX.
$ !
$ SET NOON
$ WRITE SYS$OUTPUT "Removing MOP DLL clients from DECnet database..."
$ MCR NCL
DELETE NODE 0 MOP CLIENT SLIDER
EXIT
$ !
$ WRITE SYS$OUTPUT "DECnet Phase V MOPDLL client cleanup complete!"
$ EXIT
$ @SYS$STARTUP:LAN$STARTUP
%RUN-S-PROC_ID, identification of created process is 2920009B
$
$ @SYS$STARTUP:LAN$STARTUP
For more information about the LANCP utility, see the HP OpenVMS System Manager's Manual and
the HP OpenVMS System Management Utilities Reference Manual.
4.5.6 Configuring DECnet
The process of configuring the DECnet network typically entails several operations, as shown in Table 4-3. An OpenVMS Cluster running both implementations of DECnet requires a system disk for DECnet for OpenVMS (Phase IV) and another system disk for DECnet--Plus (Phase V).
Note: DECnet for OpenVMS implements Phase IV of Digital Network Architecture (DNA). DECnet--Plus implements Phase V of DNA. The following discussions are specific to the DECnet for OpenVMS product.
Reference: Refer to the DECnet--Plus documentation for equivalent DECnet--Plus configuration information.
Step | Action |
---|---|
1 |
Log in as system manager and execute the NETCONFIG.COM command
procedure. Enter information about your node when prompted.
Note that DECnet--Plus nodes execute the NET$CONFIGURE.COM command
procedure.
Reference: See the DECnet for OpenVMS or the DECnet--Plus documentation, as appropriate, for examples of these procedures. |
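A minimal sketch of step 1 (the procedure names are those cited above; the prompts and questions vary by DECnet version):

```
$ ! DECnet for OpenVMS (Phase IV) nodes:
$ @SYS$MANAGER:NETCONFIG.COM
$ ! DECnet--Plus nodes instead run:
$ @SYS$MANAGER:NET$CONFIGURE.COM
```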
2 |
When a node uses multiple LAN adapter connections to the same LAN and
also uses DECnet for communications, you must
disable DECnet use of all but one of the LAN devices.
To do this, remove all but one of the lines and circuits associated with the adapters connected to the same LAN or extended LAN from the DECnet configuration database after the NETCONFIG.COM procedure is run.
For example, issue the following commands to invoke NCP and disable
DECnet use of the LAN device XQB0:
References: See Guidelines for OpenVMS Cluster Configurations for more information about distributing connections to LAN segments in OpenVMS Cluster configurations. See the DECnet--Plus documentation for information about removing routing circuits associated with all but one LAN adapter. (Note that the LAN adapter issue is not a problem if the DECnet--Plus node uses extended addressing and does not have any Phase IV compatible addressing in use on any of the routing circuits.) |
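For step 2, the NCP commands might look like the following sketch, assuming the DECnet circuit name QNA-1 corresponds to the LAN device XQB0 (an assumption for illustration; verify the actual names on your system first):

```
$ RUN SYS$SYSTEM:NCP
NCP> SHOW KNOWN CIRCUITS          ! identify the circuit mapped to XQB0
NCP> DEFINE CIRCUIT QNA-1 STATE OFF
NCP> PURGE CIRCUIT QNA-1 ALL      ! remove the circuit from the permanent database
NCP> PURGE LINE QNA-1 ALL         ! remove the associated line as well
NCP> EXIT
```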
3 |
Make remote node data available clusterwide. NETCONFIG.COM creates in
the SYS$SPECIFIC:[SYSEXE] directory the permanent remote-node database
file NETNODE_REMOTE.DAT, in which remote-node data is maintained. To
make this data available throughout the OpenVMS Cluster, you move the
file to the SYS$COMMON:[SYSEXE] directory.
Example: Enter the following commands to make DECnet
information available clusterwide:
If your configuration includes multiple system disks, you can set
up a common NETNODE_REMOTE.DAT file automatically by using the
following command in SYLOGICALS.COM:
Notes: HP recommends that you set up a common NETOBJECT.DAT file clusterwide in the same manner. DECdns is used by DECnet--Plus nodes to manage node data (the namespace). For DECnet--Plus, Session Control Applications replace objects. |
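Step 3 can be sketched as follows (a minimal example; adjust the device and directory specifications to your configuration):

```
$ ! Move the remote-node database from the node-specific directory
$ ! to the cluster-common directory so all members share it
$ RENAME SYS$SPECIFIC:[SYSEXE]NETNODE_REMOTE.DAT SYS$COMMON:[SYSEXE]NETNODE_REMOTE.DAT
```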
4 |
Designate and enable router nodes to support the use of a cluster
alias. At least one node participating in a cluster alias must be
configured as a level 1 router.
On Integrity server and Alpha systems, you might need to enable level 1 routing manually because the NETCONFIG.COM procedure does not prompt you with the routing question. If the configuration includes a combination of Integrity server nodes and Alpha nodes, you must enable level 1 routing manually (see the example below) on one of the Alpha nodes.
Example: On Alpha systems, if you need to enable level
1 routing on an Alpha node, invoke the NCP utility to do so. For example:
Note: On Integrity servers and Alpha systems, level 1 routing is supported to enable cluster alias operations only. |
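The manual step described above can be sketched with NCP as follows (ROUTING IV is the Phase IV level 1 router executor type; the setting takes effect when the network is next started):

```
$ RUN SYS$SYSTEM:NCP
NCP> DEFINE EXECUTOR TYPE ROUTING IV
NCP> EXIT
```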
5 |
Optionally, define a cluster alias. If you want to define a cluster
alias, invoke the NCP utility to do so. The information you specify
using these commands is entered in the DECnet permanent executor
database and takes effect when you start the network.
Example: The following NCP commands establish SOLAR as
an alias:
Reference: Section 4.5.8 describes the cluster alias. Section 4.5.9 describes how to enable alias operations for other computers. See the DECnet--Plus documentation for information about setting up a cluster alias on DECnet--Plus nodes. Note: DECnet for OpenVMS nodes and DECnet--Plus nodes cannot share a cluster alias. |
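A sketch of the alias definition in step 5 (SOLAR is the alias name used in the text; the DECnet address 2.1 is a placeholder for an unused address in your area):

```
$ RUN SYS$SYSTEM:NCP
NCP> DEFINE NODE SOLAR ADDRESS 2.1       ! placeholder address
NCP> DEFINE EXECUTOR ALIAS NODE SOLAR    ! adopt SOLAR as this node's alias
NCP> EXIT
```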