OpenVMS Cluster Systems
E.4.1 Status

If successful, the SYS$LAVC_DEFINE_NET_COMPONENT subroutine creates a COMP data structure and returns its ID value. This subroutine copies the user-specified parameters into the data structure and sets the reference count to zero.
The component ID value is a 32-bit value that has a one-to-one
association with a network component. Lists of these component IDs are
passed to SYS$LAVC_DEFINE_NET_PATH to specify the components used when
a packet travels from one node to another.
E.4.2 Error Messages

SYS$LAVC_DEFINE_NET_COMPONENT can return the error condition codes shown in the following table.
E.5 Creating a Network Component List

The SYS$LAVC_DEFINE_NET_PATH subroutine creates a directed list of network components between two network nodes. A directed list is a list of all the components through which a packet passes as it travels from the failure analysis node to other nodes in the cluster network. Use the following format to specify the parameters:
STATUS = SYS$LAVC_DEFINE_NET_PATH (
                network_component_list,
                used_for_analysis_status,
                bad_component_id )
Table E-5 describes the SYS$LAVC_DEFINE_NET_PATH parameters.
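The following C sketch is not from the manual, which documents the subroutine interface rather than a calling program; the C prototype, the counted-longword layout of the component list, and the placeholder component IDs are assumptions made for illustration.

```c
/* Hypothetical C sketch of a SYS$LAVC_DEFINE_NET_PATH call.
   The prototype and list layout shown here are assumptions;
   see the format and Table E-5 above for the documented interface. */
#include <stdio.h>

extern unsigned int sys$lavc_define_net_path(
    unsigned int *network_component_list,   /* counted list of component IDs */
    unsigned int *used_for_analysis_status, /* receives analysis status      */
    unsigned int *bad_component_id);        /* receives offending ID         */

int main(void)
{
    /* Component IDs as returned earlier by SYS$LAVC_DEFINE_NET_COMPONENT
       (placeholder values for illustration only). */
    unsigned int local_adapter_id = 0, segment_id = 0, remote_adapter_id = 0;

    /* Assumed layout: first longword holds the count of IDs that follow. */
    unsigned int comp_list[4] = { 3, local_adapter_id, segment_id,
                                  remote_adapter_id };

    unsigned int analysis_status = 0, bad_id = 0;
    unsigned int status = sys$lavc_define_net_path(comp_list,
                                                   &analysis_status, &bad_id);

    if (!(status & 1))   /* OpenVMS condition codes: low bit set = success */
        printf("Path definition failed; bad component ID: %u\n", bad_id);
    return 0;
}
```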
E.5.1 Status

This subroutine creates a directed list of network components that describes a specific network path. If SYS$LAVC_DEFINE_NET_PATH is successful, it creates a CLST data structure. If one of the two nodes is the local node, this data structure is associated with a PEDRIVER channel. In addition, the reference count for each network component in the list is incremented. If neither node is the local node, the used_for_analysis_status address contains an error status. The SYS$LAVC_DEFINE_NET_PATH subroutine returns a status value in register R0, as described in Table E-6, indicating whether the network component list is correctly constructed.
E.5.2 Error Messages

SYS$LAVC_DEFINE_NET_PATH can return the error condition codes shown in the following table.
E.6 Starting Network Component Failure Analysis

The SYS$LAVC_ENABLE_ANALYSIS subroutine starts the network component failure analysis.

Example: The following is an example of using the SYS$LAVC_ENABLE_ANALYSIS subroutine:
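STATUS = SYS$LAVC_ENABLE_ANALYSIS ( )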
E.6.1 Status

This subroutine attempts to enable the network component failure analysis code. The attempt will succeed if at least one component list is defined.
SYS$LAVC_ENABLE_ANALYSIS returns a status in register R0.
E.6.2 Error Messages

SYS$LAVC_ENABLE_ANALYSIS can return the error condition codes shown in the following table.
E.7 Stopping Network Component Failure Analysis

The SYS$LAVC_DISABLE_ANALYSIS subroutine stops the network component failure analysis.

Example: The following is an example of using SYS$LAVC_DISABLE_ANALYSIS:
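STATUS = SYS$LAVC_DISABLE_ANALYSIS ( )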
E.7.1 Status

This subroutine disables the network component failure analysis code and, if analysis was enabled, deletes all the network component definitions and network component list data structures from nonpaged pool.
SYS$LAVC_DISABLE_ANALYSIS returns a status in register R0.
E.7.2 Error Messages

SYS$LAVC_DISABLE_ANALYSIS can return the error condition codes shown in the following table.
Appendix F
Troubleshooting the NISCA Protocol

Note: Additional troubleshooting information specific to the revised PEDRIVER is planned for the next revision of this manual.

F.1 How NISCA Fits into the SCA
The NISCA protocol is an implementation of the Port-to-Port Driver
(PPD) protocol of the SCA.
F.1.1 SCA Protocols
As described in Chapter 2, the SCA is a software architecture that provides efficient communication services to low-level distributed applications (for example, device drivers, file services, network managers).
The SCA specifies a number of protocols for OpenVMS Cluster systems, including System Applications (SYSAP), System Communications Services (SCS), the Port-to-Port Driver (PPD), and the Physical Interconnect (PI) of the device driver and LAN adapter. Figure F-1 shows these protocols as the interdependent levels that make up the SCA architecture, with the NISCA protocol as a particular implementation of the PPD layer.
Figure F-1 Protocols in the SCA Architecture
Table F-1 describes the levels of the SCA protocol shown in Figure F-1.
| Protocol | Description |
|---|---|
| SYSAP | Represents clusterwide system applications that execute on each node. These system applications share communication paths in order to send messages between nodes. Examples of system applications are disk class drivers (such as DUDRIVER), the MSCP server, and the connection manager. |
| SCS | Manages connections around the OpenVMS Cluster and multiplexes messages between system applications over a common transport called a virtual circuit (see Section F.1.2). The SCS layer also notifies individual system applications when a connection fails so that they can respond appropriately. For example, an SCS notification might trigger DUDRIVER to fail over a disk, trigger a cluster state transition, or notify the connection manager to start timing reconnect (RECNXINTERVAL) intervals. |
| PPD | Provides a message delivery service to other nodes in the OpenVMS Cluster system. |
| PI | Provides connections to LAN devices. PI represents LAN drivers and adapters over which packets are sent and received. |
F.1.2 Paths Used for Communication
The NISCA protocol controls communications over the paths described in
Table F-2.
| Path | Description |
|---|---|
| Virtual circuit | A common transport that provides reliable port-to-port communication between OpenVMS Cluster nodes, ensuring that messages are delivered without duplication or loss and in the proper sequence. The virtual circuit descriptor table in each port indicates the status of its port-to-port circuits. After a virtual circuit is formed between two ports, communication can be established between SYSAPs in the nodes. |
| Channel | A logical communication path between two LAN adapters located on different nodes. Channels between nodes are determined by the pairs of adapters and the connecting network. For example, two nodes, each having two adapters, could establish four channels. The messages carried by a particular virtual circuit can be sent over any of the channels connecting the two nodes. |
Note: The difference between a channel and a virtual
circuit is that channels provide a path for datagram service. Virtual
circuits, layered on channels, provide an error-free path between
nodes. Multiple channels can exist between nodes in an OpenVMS Cluster,
but only one virtual circuit can exist between any two nodes at a time.
F.1.3 PEDRIVER
The port emulator driver, PEDRIVER, implements the NISCA protocol and establishes and controls channels for communication between local and remote LAN ports.
PEDRIVER implements a packet delivery service (at the TR level of the NISCA protocol) that guarantees the sequential delivery of messages. The messages carried by a particular virtual circuit can be sent over any of the channels connecting two nodes. The choice of channel is determined by the sender (PEDRIVER) of the message. Because a node sending a message can choose any channel, PEDRIVER, as a receiver, must be prepared to receive messages over any channel.
At any point in time, the TR level makes use of a single "preferred channel" to carry the traffic for a particular virtual circuit.
Reference: See Appendix G for more information about how transmit channels are selected.
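As a toy illustration of this behavior (not PEDRIVER source code), the following sketch models a virtual circuit that transmits on a single preferred channel while accepting received messages from any channel; the structure names, fields, and failover rule are invented for illustration only.

```c
/* Toy model of preferred-channel use; not PEDRIVER's actual code.
   A virtual circuit transmits on one preferred channel, but the
   receiver must accept messages arriving on any channel. */
#include <stdio.h>

#define MAX_CHANNELS 4           /* e.g., 2 adapters per node -> 4 channels */

struct channel {
    int usable;                  /* channel currently open */
};

struct virtual_circuit {
    struct channel chans[MAX_CHANNELS];
    int preferred;               /* index of the preferred channel */
};

/* Sender side: keep using the preferred channel while it is usable. */
static int pick_transmit_channel(struct virtual_circuit *vc)
{
    if (vc->chans[vc->preferred].usable)
        return vc->preferred;
    for (int i = 0; i < MAX_CHANNELS; i++)   /* fail over to any open channel */
        if (vc->chans[i].usable) {
            vc->preferred = i;
            return i;
        }
    return -1;                               /* no usable channel remains */
}

/* Receiver side: accept a message regardless of which channel carried it. */
static void receive_message(struct virtual_circuit *vc, int channel_index)
{
    (void)vc;
    printf("accepted message from channel %d\n", channel_index);
}

int main(void)
{
    struct virtual_circuit vc = { { {1}, {1}, {1}, {1} }, 0 };
    printf("transmitting on preferred channel %d\n",
           pick_transmit_channel(&vc));
    receive_message(&vc, 3);     /* messages may arrive on any channel */
    return 0;
}
```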
F.2 Addressing LAN Communication Problems
This section describes LAN communication problems and how to address them.
F.2.1 Symptoms
Communication trouble in OpenVMS Cluster systems can be indicated by a variety of symptoms.
Before you initiate complex diagnostic procedures, do not overlook the
obvious. Always make sure the hardware is configured and connected
properly and that the network is started. Also, make sure system
parameters are set correctly on all nodes in the OpenVMS Cluster.
F.2.2 Traffic Control
Keep in mind that an OpenVMS Cluster system generates substantially heavier traffic than other LAN protocols. In many cases, cluster behavior problems that appear to be related to the network are actually caused by software, hardware, or user errors, so a large amount of traffic does not necessarily indicate a problem with the OpenVMS Cluster network. The amount of traffic generated depends on how users use the system and on whether the OpenVMS Cluster is configured with additional interconnects (such as DSSI and CI).

If the amount of traffic generated by the OpenVMS Cluster exceeds the expected or desired levels, you might be able to reduce the level of traffic.