$ @sys$update:autogen getdata reboot nofeedback
By default, a cluster satellite configures its Phase IV Prefix as 49:: and its node synonym directory as .DNA_Nodesynonym. Some clusters may want to have different values for one or both of these attributes. To change these defaults for satellites added to the cluster, define the following logicals in SYS$COMMON:[SYSMGR]NET$LOGICALS.COM before running CLUSTER_CONFIG.
$ define/system/nolog net$phaseiv_prefix "<prefix value>"
$ define/system/nolog decnet_migrate_dir_synonym "<synonym dir>"
To change these values for a satellite that has already been
configured, run NET$CONFIGURE from that satellite.
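For example (assuming the standard location of the procedure in SYS$MANAGER):

$ @sys$manager:net$configure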
9.1.4 Customizing Your MOP Client Database for Multiple Boot Nodes
By default, the file NET$MOP_CLIENT_STARTUP.NCL resides in SYS$SYSROOT:[SYSMGR]. In this location, however, the MOP client information is available only to the node on which the file resides. It is up to the system manager to make that information available to more boot nodes, if desired.
Both CLUSTER_CONFIG.COM and NET$CONFIGURE.COM modify the file SYS$MANAGER:NET$MOP_CLIENT_STARTUP.NCL for the node on which the procedure is invoked. If the file is found in SYS$SYSROOT:[SYSMGR], it is modified and left in that location. Similarly, if the file is found in SYS$COMMON:[SYSMGR], it is modified and left in that location.
One way of allowing more boot nodes to access NET$MOP_CLIENT_STARTUP.NCL is to move it to SYS$COMMON:[SYSMGR]NET$MOP_CLIENT_STARTUP.NCL. All nodes in the OpenVMS Cluster then have access to it.
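For example, assuming the file currently resides in the node-specific directory, the following command moves it to the common directory:

$ rename sys$specific:[sysmgr]net$mop_client_startup.ncl -
_$ sys$common:[sysmgr]net$mop_client_startup.ncl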
Alternatively, you can create one file for common MOP client
information. Designated boot nodes can execute this file by placing
@ncl_script_name in their own
SYS$MANAGER:NET$MOP_CLIENT_STARTUP.NCL file. This method
requires more work by the system manager, however, because the
configuration procedures do not modify the common file directly.
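For example, if the shared definitions are kept in a common file named MOP_CLIENTS_COMMON.NCL (a hypothetical name), each designated boot node could execute it from its own node-specific script:

! In SYS$MANAGER:NET$MOP_CLIENT_STARTUP.NCL on each designated boot node
@sys$common:[sysmgr]mop_clients_common.ncl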
9.2 Using an OpenVMS Cluster Alias
All or some nodes in an OpenVMS Cluster environment can be represented in the network as a single node by establishing an alias for the OpenVMS Cluster. To the rest of the network, an alias node looks like a normal node. It has a normal node object entry in the namespace, which provides a standard address tower. The alias has a single DECnet address that represents the OpenVMS Cluster environment as a whole. The alias allows access to common resources on the OpenVMS Cluster environment without knowing which nodes comprise the OpenVMS Cluster.
Using an alias never precludes using an individual node name and address. Thus, a remote node can address the OpenVMS Cluster as a single node, as well as address any OpenVMS Cluster member individually.
You decide which nodes participate in an alias. It is not necessary for every member of an OpenVMS Cluster environment to be part of the alias. Those nodes in the OpenVMS Cluster environment that have specifically joined the alias make up the alias membership, and connections addressed to the alias are distributed among these members. You can also have multiple aliases, which allow end nodes to be members of more than one alias. Multiple aliases are also useful in a mixed-architecture cluster: you can have one alias for all the nodes, one for the Alpha systems, and another for the VAX systems.
You can have a maximum of three aliases. Members of the same alias must be members of the same OpenVMS Cluster environment. Nodes joining the same alias must be in the same DECnet area.
When you create multiple aliases, the first alias created is used for outgoing connections by any application whose outgoing alias attribute is set to TRUE. If this alias is not enabled, the local node name is used for the outgoing connection.
Finally, nodes that assume the alias should have a common authorization file.
There must be at least one adjacent DECnet Phase V router on a LAN to support an OpenVMS Cluster alias. A single router can support multiple OpenVMS Cluster environments on a LAN, and providing alias support does not prevent a router from providing normal routing support. OpenVMS Cluster environments do not contain routers themselves. If the LAN forms a complete network and all of its nodes are DECnet Phase V end nodes, no router is required, and any member of the OpenVMS Cluster can communicate with any system on the LAN. If, however, the LAN is part of a larger network or there are Phase IV nodes on the LAN, there must be at least one adjacent DECnet Phase V router on the LAN. The adjacent DECnet Phase V router allows members of the cluster to communicate with Phase IV nodes or with systems in the larger network beyond the LAN.
To add a node in an OpenVMS Cluster environment to the alias, use the NET$CONFIGURE.COM procedure. For information about NET$CONFIGURE.COM, refer to the DECnet-Plus for OpenVMS Applications Installation and Advanced Configuration guide.
You must run NET$CONFIGURE.COM on each node in the OpenVMS Cluster environment that you want to become a member of the alias.
Before an alias can be identified by name, you must create a node object entry for it in the namespace. Do this only once for each OpenVMS Cluster.
To add an object entry for an OpenVMS Cluster alias in a DECnet Phase V area, you need a Phase IV-style node ID that is unique in your network.
The decnet_register tool converts a Phase IV-style address of the form area.node into a 6-byte address when registering a Phase IV node (see Section 5.3.4 and Chapter 5 for decnet_register). (In Phase IV, an area has a value in the range 1 to 63, and a node has a value in the range 1 to 1023; for example, 63.135.) The converted 6-byte address has the form AA-00-04-00-87-FC.
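To illustrate the arithmetic behind this conversion, here is a hypothetical DCL sketch (it is not part of the decnet_register tool): the 16-bit Phase IV address is (area * 1024) + node, and its two bytes follow the constant prefix AA-00-04-00, low byte first.

$ ! Hypothetical sketch: convert Phase IV address 63.135 to a 6-byte address
$ area = 63
$ node = 135
$ addr16 = area * 1024 + node        ! 64647 decimal = FC87 hex
$ low = addr16 .and. 255             ! 135 decimal = 87 hex
$ high = addr16 / 256                ! 252 decimal = FC hex
$ write sys$output f$fao("AA-00-04-00-!XB-!XB", low, high)
$ ! Displays: AA-00-04-00-87-FC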
If you are converting an existing Phase IV OpenVMS Cluster to DECnet Phase V, use the existing Phase IV alias address for the Node ID when configuring and registering the alias. If you are installing a new OpenVMS Cluster in a DECnet Phase V network, use any Phase IV-style address that is unique to your network for the node ID when configuring and registering the alias.
The node ID you use when registering your alias in the namespace must be the same Node ID you use when configuring the alias module using NET$CONFIGURE.
If you want to set an outgoing alias for particular nodes in an OpenVMS Cluster, use the following command:
ncl> set alias port port-name outgoing default true
If you want to set an outgoing alias for an application, use the following command:
ncl> set session control application application-name -
_ncl> outgoing alias name alias-name
If you do not set application outgoing alias name and the application has the outgoing alias set to true, the alias name for which you set alias port outgoing default true is used.
If you define application outgoing alias name, this supersedes the setting of alias port outgoing default. If the application outgoing alias name is not enabled, the local node name is used.
If neither alias port outgoing default nor application
outgoing alias name is set, the first alias created is used as the
default for the system. If this alias is not enabled, the local node
name is used.
9.2.4 Controlling Connect Requests to the OpenVMS Cluster Alias
When a node tries to connect to an alias node, it does not know that its destination is an alias. It consults the namespace to translate the alias node name into an address, and uses the address to send data packets to the alias. Data packets can arrive at any node that is an alias member. When a node in the alias receives a request for a connection to the alias, that node selects a member node (possibly itself) to own the connection.
The node makes its selection based on the following criteria:
Once an eligible node is selected, the incoming connect request is forwarded to that node, and the connection is established.
Each connection to the alias is associated with one node, which is a member of the alias. If there is a problem with that node, the connection is lost. It is not transferred to another node in the alias.
If your node is in an OpenVMS Cluster environment using an alias, you can specify in the application database which network applications use incoming and outgoing alias connections. If you use the defaults that DIGITAL specifies for the applications supplied with DECnet-Plus, only the MAIL application is associated with the alias (for outgoing connections). If other applications have been added to the database (such as Rdb, DQS, or an application you supply), you can enable the outgoing alias for the objects associated with those applications.
If you converted from Phase IV to Phase V (or added or changed objects prior to installing DECnet-Plus), the objects will not change back to the defaults.
When MAIL is associated with the alias, MAIL effectively treats the OpenVMS Cluster as a single node. Ordinarily, replies to mail messages are directed to the node that originated the message; the reply is not delivered if that node is not available. If the node is in an OpenVMS Cluster and uses the OpenVMS Cluster alias, an outgoing mail message is identified by the alias node address rather than the individual address of the originating node. An incoming reply directed to the alias address is given to any active node in the OpenVMS Cluster and is delivered to the originator's mail file.
The alias permits a remote node to hold a single proxy for the whole OpenVMS Cluster rather than a proxy for each node in the OpenVMS Cluster. A clusterwide proxy can be useful if the alias node address is used for outgoing connections originated by the file access listener (FAL), the application that accesses the file system.
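For example, a system manager on a remote node might grant a single clusterwide proxy with the AUTHORIZE utility; the alias name CLSALI and the user name SMITH here are hypothetical:

$ run sys$system:authorize
UAF> add/proxy clsali::smith smith/default
UAF> exit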
Also, do not allow applications whose resources are not accessible clusterwide to receive incoming connect requests directed to the alias node address. All processors in the OpenVMS Cluster must be able to access and share all resources (such as files and devices). For more information about sharing files in an OpenVMS Cluster environment, see Section 9.3.
The following example configures a session control application entity to enable or disable incoming or outgoing connect requests. Refer to DECnet-Plus Network Control Language Reference for more information about these attributes.
ncl> create session control
ncl> enable session control
ncl> create session control application mail
ncl> create session control application foo
ncl> set session control application mail -
_ncl> outgoing alias true (1)
ncl> set session control application foo -
_ncl> incoming alias false (2)
ncl> enable session control application mail
ncl> enable session control application foo

(1) Directs outgoing connect requests from the mail application to use the alias.
(2) Prevents the foo application from receiving incoming connect requests directed to the alias node address.
Section F.2 provides more examples of setting up a session
control application entity.
9.2.4.2 Controlling the Number of Connections Allowed for an Alias
The number of connections allowed for an alias equals the number of
connections you have specified with the nsp maximum transport
connections or osi maximum transport connections
characteristic. For more information about configuring the NSP and OSI
transports, refer to the DECnet-Plus for OpenVMS Applications Installation and Advanced Configuration guide.
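For example, the following command sets the NSP limit to an illustrative value of 200 (depending on the transport's state, you may need to disable the nsp entity before changing this characteristic):

ncl> set nsp maximum transport connections 200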
9.3 Sharing Network Applications in an OpenVMS Cluster Environment
If your OpenVMS Cluster environment participates in a DECnet Phase V network, you must decide if you want nodes in the OpenVMS Cluster to share common network definitions for such items as network applications. Sharing common network definitions simplifies updates. You change only the shared definitions rather than changing definitions for each member of the OpenVMS Cluster. To share files, copy the following script files from SYS$SPECIFIC:[SYSMGR] (where they are normally created) to SYS$COMMON:[SYSMGR]:
If you do not want certain files shared, keep them in SYS$SPECIFIC:[SYSMGR]. In particular, keep communication-specific startup scripts that contain hardware-specific information, such as the following, in SYS$SPECIFIC:[SYSMGR]:
If the application database is identical on every node in an OpenVMS Cluster environment, you can share those common definitions among all nodes in the OpenVMS Cluster. On one node, move the node-specific script to the common directory:

$ rename sys$specific:[sysmgr]net$application_startup.ncl -
_$ sys$common:[sysmgr]net$application_startup.ncl

Alternatively, copy the script and leave the original in place:

$ copy sys$specific:[sysmgr]net$application_startup.ncl -
_$ sys$common:[sysmgr]net$application_startup.ncl

On each remaining node, delete the node-specific version so that the common script is used:

$ delete sys$specific:[sysmgr]net$application_startup.ncl;*

Alternatively, rename the node-specific version if you want to preserve it:

$ rename sys$specific:[sysmgr]net$application_startup.ncl;* -
_$ sys$specific:[sysmgr]net$application_startup_old.ncl;*
A system running DECnet-Plus software can act as a host system that performs the following services for remote client systems:
The Maintenance Operations Protocol (MOP) module allows you to do these tasks. You can downline load or upline dump DECnet Phase IV or Phase V nodes. Table 10-1 lists the data links that MOP supports and the supported functions for those links.
Table 10-1 MOP-Supported Data Links and Functions

| CSMA-CD and FDDI IEEE 802.3 LAN | CSMA-CD Ethernet LAN | HDLC | Synchronous DDCMP | LAPB |
|---|---|---|---|---|
| Loop requester | Loop requester | Loop requester | Loop requester | Loop requester |
| Console requester | Console requester | Console requester | Console requester | |
| Dump server | Dump server | Dump server | Dump server | |
| Load server | Load server | Load server | Load server | |
| Configuration monitor | Configuration monitor | | | |
| Console carrier | Console carrier | | | |
| Query requester | | | | |
| Test requester | | | | |
10.1 Automatically Configuring MOP
You can automatically set up a basic MOP configuration by running the
network configuration procedure. For more information about the
configuration procedure, refer to your installation and configuration
guides.
The configuration procedure either creates MOP scripts from the information you supply at the prompts, or the software provides a permanent client database:
If you start MOP (see Section 10.3) after running your configuration procedure, MOP can do its various tasks provided that you supply all necessary attributes in the NCL command or in the passive load request. This includes:
To downline load an OpenVMS Cluster satellite or to store information about downline loads to specific network servers, you can use the mop client database. The mop client database stores information, so you do not have to enter it every time you issue an NCL command.
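As a minimal sketch, entries in the mop client database script might look like the following; the client name, LAN address, and image file specification are hypothetical:

ncl> create mop client satellite1
ncl> set mop client satellite1 addresses {aa-00-04-00-12-04}
ncl> set mop client satellite1 system image {sys$system:example_image.sys}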
The mop client database is the NET$MOP_CLIENT_STARTUP.NCL script file. You can add information to or delete information from it with the NET$CONFIGURE.COM Option 8, "Configure MOP Client database." If you want to add a client, the procedure prompts you for information about the client such as the following:
To automatically configure an OpenVMS Cluster satellite, use the cluster configuration command procedure (CLUSTER_CONFIG.COM). For more information about CLUSTER_CONFIG.COM, refer to the OpenVMS Cluster Systems for OpenVMS guide.
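A typical invocation looks like the following (assuming the standard location of the procedure in SYS$MANAGER); the procedure then prompts for the satellite's configuration details:

$ @sys$manager:cluster_config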