To control queues, you use one or several queue managers to maintain a clusterwide queue database that stores information about queues and jobs.
Reference: For detailed information about setting up OpenVMS Cluster queues, see Chapter 7.
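For example, assuming placeholder node names and a placeholder queue database location, a clusterwide queue manager might be started with a command similar to the following; the /ON list names the nodes, in order of preference, that can run the queue manager, and the asterisk lets any remaining cluster member take over if those nodes are unavailable (see Chapter 7 for the complete procedure):

$ START/QUEUE/MANAGER/ON=(NODE1::,NODE2::,*) $1$DGA1:[QUEUES]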
This chapter provides an overview of various types of OpenVMS Cluster configurations and the ways they are interconnected.
For definitive information about supported OpenVMS Cluster configurations, see:
Every node in an OpenVMS Cluster must have direct connections to all other nodes. Sites can choose to use one or more of the following interconnects:
Processing needs and available hardware resources determine how individual OpenVMS Cluster systems are configured. The configuration discussions in this chapter are based on these physical interconnects.
You can use bridges or switches to connect the OpenVMS Integrity server nodes' Fast Ethernet or Gigabit Ethernet NICs to any intersite interconnect the WAN supplier provides, such as [D]WDM, Gigabit Ethernet, Fibre Channel, or others.
Multihost shared storage on a SCSI interconnect, commonly known as SCSI clusters, is not supported on OpenVMS Integrity server systems. It is also not supported on OpenVMS Alpha systems for newer SCSI adapters. However, multihost shared storage on industry-standard Fibre Channel is supported. Locally attached storage, on both OpenVMS Alpha systems (FC or SCSI storage) and OpenVMS Integrity server systems (Fibre Channel, SAS, or SCSI storage), can be served to any other member of the cluster.
All Ethernet interconnects are industry-standard local area
networks that are generally shared by a wide variety of network
consumers. When OpenVMS Cluster systems are based on a LAN, cluster
communications are carried out by a port driver (PEDRIVER) that
emulates CI port functions.
3.2.1 Design
The OpenVMS Cluster software is designed to use the Ethernet
interconnects simultaneously with the DECnet, TCP/IP, and SCS
protocols. This is accomplished by allowing LAN data link software to
control the hardware port. This software provides a multiplexing
function so that the cluster protocols are simply another user of a
shared hardware resource. See Figure 2-1 for an illustration of this
concept.
3.2.1.1 PEDRIVER Fast Path Support
PEDRIVER, the software that enables OpenVMS Cluster communications over a LAN, also provides Fast Path support. This PEDRIVER feature provides the following benefits:
For more detailed information, see the HP OpenVMS I/O User's Reference Manual, the HP OpenVMS System Manager's Manual, and the HP OpenVMS System Management Utilities Reference Manual.
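For example (an illustration only; the device and CPU numbers are placeholders), once Fast Path is enabled on the system, the PEDRIVER port PEA0 can be assigned a preferred CPU with the DCL SET DEVICE command, moving its Fast Path processing off the primary CPU:

$ SET DEVICE PEA0 /PREFERRED_CPUS=2   ! prefer CPU 2 for PEA0's Fast Path processing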
3.2.2 Cluster Group Numbers and Cluster Passwords
A single LAN can support multiple LAN-based OpenVMS Cluster systems.
Each OpenVMS Cluster is identified and secured by a unique cluster
group number and a cluster password. Chapter 2 describes cluster
group numbers and cluster passwords in detail.
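For example, the group number and password are recorded on each node with the SYSMAN utility (the values shown are placeholders; every member of the cluster must use the same values):

$ RUN SYS$SYSTEM:SYSMAN
SYSMAN> CONFIGURATION SET CLUSTER_AUTHORIZATION /GROUP_NUMBER=1234 /PASSWORD=XYZZY
SYSMAN> CONFIGURATION SHOW CLUSTER_AUTHORIZATION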
3.2.3 Servers
OpenVMS Cluster computers interconnected by a LAN are generally configured as either servers or satellites. The following table describes servers.
3.2.4 Satellites
Satellites are computers without a local system disk. Generally, satellites are consumers of cluster resources, although they can also provide facilities for disk serving, tape serving, and batch processing. If satellites are equipped with local disks, they can enhance performance by using such local disks for paging and swapping.
Satellites are booted remotely from a boot server (or from a MOP server
and a disk server) serving the system disk. Section 3.2.5 describes MOP
and disk server functions during satellite booting.
3.2.5 Satellite Booting (Alpha)
When a satellite requests an operating system load, a MOP server for the appropriate OpenVMS Alpha operating system sends a bootstrap image to the satellite that allows the satellite to load the rest of the operating system from a disk server and join the cluster. The sequence of actions during booting is described in Table 3-1.
Step | Action | Comments |
---|---|---|
1 | Satellite requests MOP service. | This is the original boot request that a satellite sends out across the network. Any node in the OpenVMS Cluster that has MOP service enabled and has the LAN address of the particular satellite node in its database can become the MOP server for the satellite. |
2 | MOP server loads the Alpha system. | The MOP server responds to an Alpha satellite boot request by downline loading the SYS$SYSTEM:APB.EXE program along with the required parameters. For Alpha computers, some of these parameters include the system disk name and the root number of the satellite. |
3 | Satellite finds additional parameters located on the system disk and root. | The satellite finds OpenVMS Cluster system parameters, such as SCSSYSTEMID, SCSNODE, and NISCS_CONV_BOOT. The satellite also finds the cluster group code and password. |
4 | Satellite executes the load program. | The program establishes an SCS connection to a disk server for the satellite system disk and loads the SYSBOOT.EXE program. |
Configuring and starting a satellite booting service for Alpha
computers is described in detail in Section 4.5.
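As a hedged sketch of what that configuration involves (the node name, LAN address, and system root below are placeholders, and qualifier details can vary between OpenVMS versions), a boot server's LANCP node database typically contains an entry like the following for each Alpha satellite, in addition to MOP service being enabled on the boot server's LAN device:

$ RUN SYS$SYSTEM:LANCP
LANCP> DEFINE NODE SAT1 /ADDRESS=08-00-2B-12-34-56 /ROOT=$1$DGA1:<SYS10.> /BOOT_TYPE=ALPHA_SATELLITE
LANCP> LIST NODE SAT1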
3.2.6 Satellite Booting (Integrity servers)
Configuring and starting a satellite booting service for Integrity
server systems is described in detail in Section 4.5.
3.2.7 Configuring Multiple LAN Adapters
LAN support for multiple adapters allows PEDRIVER (the port driver for
the LAN) to establish more than one channel between the local and
remote cluster nodes. A channel is a network path
between two nodes that is represented by a pair of LAN adapters.
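For example, you can list the channels and virtual circuits that PEDRIVER is currently using with the SCACP utility; on nodes with multiple LAN adapters, several channels normally appear for each remote node (display details vary by OpenVMS version):

$ RUN SYS$SYSTEM:SCACP
SCACP> SHOW CHANNEL    ! one channel per local/remote LAN adapter pair
SCACP> SHOW VC         ! one virtual circuit per remote node, layered over its channels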
3.2.7.1 System Characteristics
OpenVMS Cluster systems with multiple LAN adapters have the following characteristics:
3.2.7.2 Requirements
Configurations for OpenVMS Cluster systems with multiple LAN adapters must meet the following requirements:
Rule: For each node, DECnet for OpenVMS (Phase IV) and
MOP serving (Alpha or VAX, as appropriate) can be performed by only one
adapter per extended LAN to prevent LAN address duplication.
3.2.7.3 Guidelines
The following guidelines are for configuring OpenVMS Cluster systems with multiple LAN adapters. If you configure these systems according to the guidelines, server nodes (nodes serving disk, tape, and lock traffic) can typically use some of the additional bandwidth provided by the added LAN adapters and increase the overall performance of the cluster. However, the performance increase depends on the configuration of your cluster and the applications it supports.
Configurations with multiple LAN adapters should follow these guidelines:
3.2.8 LAN Examples
Figure 3-1 shows an OpenVMS Cluster system based on a LAN
interconnect with a single Alpha server node and a single Alpha system
disk.
Figure 3-1 LAN OpenVMS Cluster System with Single Server Node and System Disk
In Figure 3-1, the server node (and its system disk) is a single
point of failure. If the server node fails, the satellite nodes cannot
access any of the shared disks including the system disk. Note that
some of the satellite nodes have locally connected disks. If you
convert one or more of these into system disks, satellite nodes can
boot from their own local system disk.
3.2.9 Fast Path for LAN Devices
With OpenVMS Version 7.3-2, further enhancements have been made to Fast Path for LAN devices, which will continue to help streamline I/O processing and improve symmetric-multiprocessing (SMP) performance scalability on newer AlphaServer systems. Enhancements include:
These features enhance the Fast Path functionality that already exists in LAN drivers. The enhanced functionality includes additional optimizations, preallocation of resources, and an optimized code path for mainline code.
For more information, see the HP OpenVMS I/O User's Reference Manual.
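As a quick, illustrative check (not a required procedure), the relevant system parameters can be displayed with SYSGEN; FAST_PATH enables the feature systemwide, and FAST_PATH_PORTS controls, per port driver, which ports participate:

$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> SHOW FAST_PATH
SYSGEN> SHOW FAST_PATH_PORTS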
3.2.10 LAN Bridge Failover Process
The following table describes how the bridge parameter settings can affect the failover process.
Option | Comments |
---|---|
Decreasing the LISTEN_TIME value allows the bridge to detect topology changes more quickly. | If you reduce the LISTEN_TIME parameter value, you should also decrease the value for the HELLO_INTERVAL bridge parameter according to the bridge-specific guidelines. However, note that decreasing the value for the HELLO_INTERVAL parameter causes an increase in network traffic. |
Decreasing the FORWARDING_DELAY value can cause the bridge to forward packets unnecessarily to the other LAN segment. | Unnecessary forwarding can temporarily cause more traffic on both LAN segments until the bridge software determines which LAN address is on each side of the bridge. |
Note: If you change a parameter on one LAN bridge, you
should change that parameter on all bridges to ensure that selection of
a new root bridge does not change the value of the parameter. The
actual parameter value the bridge uses is the value specified by the
root bridge.
3.2.11 Virtual LAN Support in OpenVMS
Virtual LAN (VLAN) is a mechanism for segmenting a LAN broadcast domain into smaller sections. The IEEE 802.1Q specification defines the operation and behavior of a VLAN. The OpenVMS implementation adds IEEE 802.1Q support to selected OpenVMS LAN drivers so that OpenVMS can now route VLAN tagged packets to LAN applications using a single LAN adapter.
You can use VLAN to do the following:
In OpenVMS, VLAN presents a virtual LAN device to LAN applications. The virtual LAN device associates a single IEEE 802.1Q tag with communications over a physical LAN device. The virtual device provides the ability to run any LAN application (for example, SCA, DECnet, TCP/IP, or LAT) over a physical LAN device, allowing host-to-host communications as shown in Figure 3-2.
DECnet-Plus and DECnet Phase IV can be configured to run over a VLAN device.
Figure 3-2 Virtual LAN
OpenVMS VLAN has been implemented through a new driver, SYS$VLANDRIVER.EXE, which provides the virtual LAN devices. Also, existing LAN drivers have been updated to handle VLAN tags. LANCP.EXE and LANACP.EXE have been updated with the ability to create and deactivate VLAN devices and to display status and configuration information.
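The following sketch is illustrative only; the exact LANCP command and qualifier names are an assumption here and may differ by OpenVMS version, so check the LANCP documentation before use. The device names and VLAN tag are placeholders:

$ RUN SYS$SYSTEM:LANCP
LANCP> CREATE DEVICE VLA0 /VLAN_DEVICE=EWA0 /TAG=20   ! virtual device VLA0 carries VLAN ID 20 over EWA0
LANCP> SHOW DEVICE VLA0 /CHARACTERISTICS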
The OpenVMS VLAN subsystem was designed with particular attention to performance. Thus, the performance cost of using VLAN support is negligible.
When configuring VLAN devices, remember that VLAN devices share the
same locking mechanism as the physical LAN device. For example, running
the OpenVMS Cluster protocol on a VLAN device along with the underlying
physical LAN device does not result in increased benefit and might, in
fact, hinder performance.
3.2.11.2 VLAN Support Details
All supported Gigabit and 10-Gb (Integrity servers only) LAN devices
are capable of handling VLAN traffic on Alpha and Integrity server
systems.
The following list describes additional details of VLAN-related support:
Figure 3-3 LAN Failover Support
3.3 Cluster over IP
OpenVMS Version 8.4 has been enhanced with the Cluster over IP
(Internet Protocol) feature. Cluster over IP provides the ability to
form clusters beyond a single LAN or VLAN segment using industry
standard Internet Protocol. This feature provides improved disaster
tolerance.
System managers can also manage and monitor an OpenVMS Cluster that uses IP for cluster communication by using the SCACP management utility.
The cluster protocol (SCS, also known as SCA) over a LAN is implemented by the port emulator driver (PEDRIVER). For cluster communication, PEDRIVER uses the User Datagram Protocol (UDP) over IP in addition to interfacing directly with the 802.3 LAN, as shown in Figure 3-4. The datagram characteristics of UDP, combined with PEDRIVER's built-in reliable delivery mechanism, are used to transport cluster messages, which SYSAPs (system-level applications) use to communicate between two cluster nodes.
Cluster over IP is an optional feature that can be enabled in addition to the traditional LAN-based communication. However, if both LAN and IP modes of communication exist between nodes in a cluster, PEDRIVER prefers LAN communication over IP.
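For example (an illustration; output and values are node specific), whether a member uses IP for cluster communication is governed by the NISCS_USE_UDP system parameter, which CLUSTER_CONFIG_LAN.COM sets to 1 when Cluster over IP is enabled:

$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> SHOW NISCS_USE_UDP

The IP interface, UDP port, and remote node address information that PEDRIVER needs at boot time is kept in configuration files (SYS$SYSTEM:PE$IP_CONFIG.DAT and SYS$SYSTEM:TCPIP$CLUSTER.DAT) maintained by the cluster configuration procedure.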
The terms OpenVMS Cluster over IP and IP Cluster Interconnect (IPCI) are used interchangeably in this document; both refer to the use of the TCP/IP stack for cluster communication.
The Cluster over IP solution is an integration of the following:
Figure 3-4 shows the cluster over IP architecture.
Figure 3-4 Cluster Communication Design Using IP
The solution consists of enhancing PEDRIVER to use the UDP/IP protocol. Some of the features of this solution include:
To ensure that cluster communication is available in an IP-only network environment, it is essential to have the TCP/IP stack loaded when cluster formation starts. This also retains the existing cluster formation functionality of OpenVMS Clusters. The normal booting sequence includes loading of the LAN drivers followed by PEDRIVER. TCP/IP drivers are loaded when TCP/IP services are started. If Cluster over IP is enabled, the LAN drivers, TCP/IP execlets, and PEDRIVER are loaded sequentially. Once the system comes up, TCP/IP services can be started to use other TCP/IP components, such as TELNET, FTP, and so on.
Ensure that the TCP/IP software is configured before configuring Cluster over IP. To ensure that the network and TCP/IP are configured properly, use the PING utility and ping the node from outside the subnet.
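For example, with TCP/IP Services for OpenVMS, a quick reachability check of a remote cluster member in another subnet might look like the following (the host name is a placeholder):

$ TCPIP PING NODE2.EXAMPLE.COM
$ TCPIP SHOW INTERFACE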