A.6.1 Step 1: Meet SCSI Grounding Requirements
You must ensure that your electrical power distribution systems meet
local requirements (for example, electrical codes) prior to installing
your OpenVMS Cluster system. If your configuration consists of two or
more enclosures connected by a common SCSI interconnect, you must also
ensure that the enclosures are properly grounded. Proper grounding is
important for safety reasons and to ensure the proper functioning of
the SCSI interconnect.
Electrical work should be done by a qualified professional.
Section A.7.8 includes details of the grounding requirements for SCSI
systems.
A.6.2 Step 2: Configure SCSI Node IDs
This section describes how to configure SCSI node and device IDs. SCSI
IDs must be assigned separately for multihost SCSI buses and
single-host SCSI buses.
Figure A-12 shows two hosts; each one is configured with a
single-host SCSI bus and shares a multihost SCSI bus. (See
Figure A-1 for the key to the symbols used in this figure.)
Figure A-12 Setting Allocation Classes for SCSI Access
The following sections describe how IDs are assigned in this type of
multihost SCSI configuration. For more information about this topic,
see OpenVMS Cluster Systems.
A.6.2.1 Configuring Device IDs on Multihost SCSI Buses
When configuring multihost SCSI buses, adhere to the following rules:
- Set each host adapter on the multihost bus to a different ID.
Start by assigning ID 7, then ID 6, and so on, using decreasing ID
numbers.
If a host has two multihost SCSI buses, allocate an ID to
each SCSI adapter separately. There is no requirement that you set the
adapters to the same ID, although using the same ID may simplify
configuration management. (Section A.6.4 describes how to set host IDs for the internal adapter using SCSI console parameters.)
- When assigning IDs to devices and storage controllers connected to multihost SCSI buses, start at ID 0 (zero), assigning the highest ID numbers to the disks that require the fastest I/O response time (higher SCSI IDs have higher arbitration priority on the bus).
- Devices connected to a multihost SCSI bus must have the same name
as viewed from each host. To achieve this, you must do one of the
following:
- Ensure that all hosts connected to a multihost SCSI bus are set to the same node allocation class, and all host adapters connected to a multihost SCSI bus have the same controller letter, as shown in Figure A-12 and in the brief example that follows this list.
- Use port allocation classes (see OpenVMS Cluster Systems) or HSZ allocation
classes (see Guidelines for OpenVMS Cluster Configurations).
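As an illustration of these rules, consider two hosts that share a SCSI bus through adapters named PKB0 and that both use ALLOCLASS 4 (the adapter name, allocation class, and ID values here are illustrative only):
- Host 1 sets its PKB0 adapter to SCSI ID 7; host 2 sets its PKB0 adapter to SCSI ID 6.
- A shared disk on the bus is set to SCSI ID 1, LUN 0.
- Because both hosts use the same node allocation class and their adapters have the same controller letter, both hosts name the shared disk $4$DKB100.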
A.6.2.2 Configuring Device IDs on Single-Host SCSI Buses
The device ID selection depends on whether you are using a node
allocation class or a port allocation class. The following discussion
applies to node allocation classes. Refer to OpenVMS Cluster Systems for a
discussion of port allocation classes.
In multihost SCSI configurations, device names generated by OpenVMS use
the format $allocation_class$DKA300. You set the allocation
class using the ALLOCLASS system parameter. OpenVMS generates the
controller letter (for example, A, B, C, and so forth) at boot time by
allocating a letter to each controller. The unit number (for example, 0, 100, 200, 300, and so forth) is derived from the SCSI device ID: the SCSI ID multiplied by 100, plus the logical unit number (LUN).
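For example, a disk at SCSI ID 3, LUN 0, on controller A of a host whose ALLOCLASS is 4 (a hypothetical device, used only to show how a name is read) is named as follows:
$4$DKA300
- $4$ is the node allocation class (the ALLOCLASS system parameter).
- DK is the SCSI disk device type.
- A is the controller letter, assigned at boot time.
- 300 is the unit number: (SCSI ID 3 x 100) + LUN 0.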
When configuring devices on single-host SCSI buses that are part of a
multihost SCSI configuration, take care to ensure that the disks
connected to the single-host SCSI buses have unique device names. Do
this by assigning different IDs to devices connected to single-host
SCSI buses with the same controller letter on systems that use the same
allocation class. Note that the device names must be different, even
though the bus is not shared.
For example, in Figure A-12, the two disks at the bottom of the picture are located on SCSI bus A of two systems that use the same allocation class. Therefore, they have been allocated different device IDs (in this case, 2 and 3), which yields the distinct device names $4$DKA200 and $4$DKA300.
For a given allocation class, SCSI device type, and controller letter
(in this example, $4$DKA), there can be up to eight devices in the
cluster, one for each SCSI bus ID. To use all eight IDs, it is
necessary to configure a disk on one SCSI bus at the same ID as a
processor on another bus. See Section A.7.5 for a discussion of the
possible performance impact this can have.
SCSI bus IDs can be effectively "doubled up" by configuring
different SCSI device types at the same SCSI ID on different SCSI
buses. For example, device types DK and MK could produce $4$DKA100 and
$4$MKA100.
A.6.3 Step 3: Power Up and Verify SCSI Devices
After connecting the SCSI cables, power up the system. Enter a console
SHOW DEVICE command to verify that all devices are visible on the SCSI
interconnect.
If there is a SCSI ID conflict, the display may omit devices that are
present, or it may include nonexistent devices. If the display is
incorrect, then check the SCSI ID jumpers on devices, the automatic ID
assignments provided by the StorageWorks shelves, and the console
settings for host adapter and HSZxx controller IDs. If changes
are made, type INIT, then SHOW DEVICE again. If problems persist, check
the SCSI cable lengths and termination.
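For example, if two host adapters were inadvertently left at the same ID, a console sequence such as the following corrects the setting and redisplays the bus (the adapter name PKB0 and the value 6 are illustrative; choose an ID that is unused on that bus):
>>>SET pkb0_host_id 6
>>>INIT
>>>SHOW DEVICE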
Example A-1 is a sample output from a console SHOW DEVICE command.
This system has one host SCSI adapter on a private SCSI bus (PKA0), and
two additional SCSI adapters (PKB0 and PKC0), each on separate, shared
SCSI buses.
Example A-1 SHOW DEVICE Command Sample Output
>>>SHOW DEVICE
dka0.0.0.6.0 DKA0 RZ26L 442D
dka400.4.0.6.0 DKA400 RRD43 2893
dkb100.1.0.11.0 DKB100 RZ26 392A
dkb200.2.0.11.0 DKB200 RZ26L 442D
dkc400.4.0.12.0 DKC400 HSZ40 V25
dkc401.4.0.12.0 DKC401 HSZ40 V25
dkc500.5.0.12.0 DKC500 HSZ40 V25
dkc501.5.0.12.0 DKC501 HSZ40 V25
dkc506.5.0.12.0 DKC506 HSZ40 V25
dva0.0.0.0.1 DVA0
jkb700.7.0.11.0 JKB700 OpenVMS V62
jkc700.7.0.12.0 JKC700 OpenVMS V62
mka300.3.0.6.0 MKA300 TLZ06 0389
era0.0.0.2.1 ERA0 08-00-2B-3F-3A-B9
pka0.7.0.6.0 PKA0 SCSI Bus ID 7
pkb0.6.0.11.0 PKB0 SCSI Bus ID 6
pkc0.6.0.12.0 PKC0 SCSI Bus ID 6
The following list describes the device names in the preceding example:
- DK devices represent SCSI disks. Disks connected to the SCSI bus
controlled by adapter PKA are given device names starting with the
letters DKA. Disks on additional buses are named according to the host
adapter name in a similar manner (DKB devices on adapter PKB, and so
forth).
The next character in the device name represents the
device's SCSI ID. Make sure that the SCSI ID for each device is unique
for the SCSI bus to which it is connected.
- The last digit in the DK device name represents the LUN number.
The HSZ40 virtual DK device in this example is at SCSI ID 4, LUN 1.
Note that some systems do not display devices that have nonzero LUNs.
- JK devices represent devices on the SCSI interconnect that are neither disks nor tapes. In this example, JK devices represent other processors on
the SCSI interconnect that are running the OpenVMS operating system. If
the other system is not running, these JK devices do not appear in the
display. In this example, the other processor's adapters are at SCSI ID
7.
- MK devices represent SCSI tapes. The A in device MKA300 indicates
that it is attached to adapter PKA0, the private SCSI bus.
- PK devices represent the local SCSI adapters. The SCSI ID for each of these adapters is displayed in the rightmost column. Make sure each adapter's ID is different from the IDs used by the other devices and host adapters on its bus.
The third character in the device name (in this example, a) is
assigned by the system so that each adapter has a unique name on that
system. The fourth character is always zero.
A.6.4 Step 4: Show and Set SCSI Console Parameters
When creating a SCSI OpenVMS Cluster system, you need to verify the
settings of the console environment parameters shown in Table A-6
and, if necessary, reset their values according to your configuration
requirements.
Table A-6 provides a brief description of SCSI console parameters.
Refer to your system-specific documentation for complete information
about setting these and other system parameters.
Note
The console environment parameters vary, depending on the host adapter
type. Refer to the Installation and User's Guide for your adapter.
Table A-6 SCSI Environment Parameters
- bootdef_dev device_name: Specifies the default boot device to the system.
- boot_osflags root_number,bootflag: Contains information that is used by the operating system to determine optional aspects of a system bootstrap (for example, conversational bootstrap).
- pk*0_disconnect: Allows the target to disconnect from the SCSI bus while the target acts on a command. When this parameter is set to 1, the target is allowed to disconnect from the SCSI bus while processing a command. When the parameter is set to 0, the target retains control of the SCSI bus while acting on a command.
- pk*0_fast: Enables SCSI adapters to perform in fast SCSI mode. When this parameter is set to 1, the default speed is set to fast mode; when the parameter is 0, the default speed is standard mode.
- pk*0_host_id: Sets the SCSI device ID of host adapters to a value between 0 and 7.
- scsi_poll: Enables console polling on all SCSI interconnects when the system is halted.
- control_scsi_term: Enables and disables the terminator on the integral SCSI interconnect at the system bulkhead (for some systems).
Note
If you need to modify any parameters, first change the parameter (using
the appropriate console SET command). Then enter a console INIT command
or press the Reset button to make the change effective.
Examples
Before setting boot parameters, display the current settings of these
parameters, as shown in the following examples:
-
>>>SHOW *BOOT*
boot_osflags 10,0
boot_reset OFF
bootdef_dev dka200.2.0.6.0
>>>
The first number in the boot_osflags parameter specifies the system
root. (In this example, the first number is 10.) The boot_reset
parameter controls the boot process. The default boot device is the
device from which the OpenVMS operating system is loaded. Refer to the
documentation for your specific system for additional booting
information. Note that you can identify multiple boot devices to
the system. By doing so, you cause the system to search for a bootable
device from the list of devices that you specify. The system then
automatically boots from the first device on which it finds bootable
system software. In addition, you can override the default boot device
by specifying an alternative device name on the boot command line.
Typically, the default boot flags suit your environment. You can
override the default boot flags by specifying boot flags dynamically on
the boot command line with the -flags option.
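For example (the device names are taken from the displays in this appendix, and the flag values are illustrative), you might set a list of default boot devices, on consoles that accept a comma-separated list, and later override both the device and the flags on the boot command line:
>>>SET bootdef_dev dka200.2.0.6.0,dkb100.1.0.11.0
>>>BOOT -FL 10,1 DKB100
In this sketch, 10 selects the system root and the flag value 1 requests a conversational bootstrap.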
-
>>>SHOW *PK*
pka0_disconnect 1
pka0_fast 1
pka0_host_id 7
The pk*0_disconnect parameter determines whether or not a target is
allowed to disconnect from the SCSI bus while it acts on a command. On
a multihost SCSI bus, the pk*0_disconnect parameter must be
set to 1, so that disconnects can occur. The pk*0_fast parameter
controls whether fast SCSI devices on a SCSI controller perform in
standard or fast mode. When the parameter is set to 0, the default
speed is set to standard mode; when the pk*0_fast parameter is set to
1, the default speed is set to fast SCSI mode. In this example, devices
on SCSI controller pka0 are set to fast SCSI mode. This means that both
standard and fast SCSI devices connected to this controller will
automatically perform at the appropriate speed for the device (that is,
in either fast or standard mode). The pk*0_host_id parameter
assigns a bus node ID for the specified host adapter. In this example,
pka0 is assigned a SCSI device ID of 7.
-
>>>SHOW *POLL*
scsi_poll ON
The scsi_poll parameter enables or disables polling of SCSI devices while the system is in console mode. Set polling ON or OFF depending on the needs and environment of your site. When polling is enabled, the output of the SHOW DEVICE command is always up to date. However, because polling can consume SCSI bus
bandwidth (proportional to the number of unused SCSI IDs), you might
want to disable polling if one system on a multihost SCSI bus will be
in console mode for an extended time. Polling must be
disabled during any hot-plugging operations. For information about hot
plugging in a SCSI OpenVMS Cluster environment, see Section A.7.6.
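For example, before a hot-plugging operation you might disable polling at the console (as noted earlier in this section, an INIT is needed to make the change take effect):
>>>SET scsi_poll OFF
>>>INIT
Re-enable polling with SET scsi_poll ON, again followed by INIT, when the operation is complete.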
-
>>>SHOW *TERM*
control_scsi_term external
The control_scsi_term parameter is used on some systems (such as the AlphaStation 400) to enable or disable the SCSI terminator next to the external connector. Set the
control_scsi_term parameter to external if a cable is attached to the
bulkhead. Otherwise, set the parameter to internal.
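Putting these parameters together, a host that is being added to a multihost SCSI bus might be configured at the console as follows (a sketch only; the adapter name PKB0 and the ID value 6 are illustrative, and the ID must be unique on that bus):
>>>SET pkb0_disconnect 1
>>>SET pkb0_fast 1
>>>SET pkb0_host_id 6
>>>INIT
The disconnect setting of 1 is required on a multihost bus, as described earlier, and the INIT makes the new values effective.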
A.6.5 Step 5: Install the OpenVMS Operating System
Refer to the OpenVMS Alpha or VAX upgrade and installation manual for
information about installing the OpenVMS operating system. Perform the
installation once for each system disk in the OpenVMS Cluster system.
In most configurations, there is a single system disk. Therefore, you
need to perform this step once, using any system.
During the installation, when you are asked if the system is to be a
cluster member, answer Yes. Then, complete the installation according
to the guidelines provided in OpenVMS Cluster Systems.
A.6.6 Step 6: Configure Additional Systems
Use the CLUSTER_CONFIG command procedure to configure additional
systems. Execute this procedure once for the second host that you have
configured on the SCSI bus. (See Section A.7.1 for more information.)
A.7 Supplementary Information
The following sections provide supplementary technical detail and
concepts about SCSI OpenVMS Cluster systems.
A.7.1 Running the OpenVMS Cluster Configuration Command Procedure
You execute either the CLUSTER_CONFIG.COM or the CLUSTER_CONFIG_LAN.COM
command procedure to set up and configure nodes in your OpenVMS Cluster
system. Your choice of command procedure depends on whether you use
DECnet or the LANCP utility for booting. CLUSTER_CONFIG.COM uses
DECnet; CLUSTER_CONFIG_LAN.COM uses the LANCP utility. (For information
about using both procedures, see OpenVMS Cluster Systems.)
Typically, the first computer is set up as an OpenVMS Cluster system
during the initial OpenVMS installation procedure (see Section A.6.5).
The CLUSTER_CONFIG procedure is then used to configure additional
nodes. However, if you originally installed OpenVMS without enabling
clustering, the first time you run CLUSTER_CONFIG, the procedure
converts the standalone system to a cluster system.
To configure additional nodes in a SCSI cluster, execute
CLUSTER_CONFIG.COM for each additional node. Table A-7 describes
the steps to configure additional SCSI nodes.
Table A-7 Steps for Installing Additional Nodes
1. From the first node, run the CLUSTER_CONFIG.COM procedure and select the default option [1] for ADD.
2. Answer Yes when CLUSTER_CONFIG.COM asks whether you want to proceed.
3. Supply the DECnet name and address of the node that you are adding to the existing single-node cluster.
4. Confirm that this will be a node with a shared SCSI interconnect.
5. Answer No when the procedure asks whether this node will be a satellite.
6. Configure the node to be a disk server if it will serve disks to other cluster members.
7. Place the new node's system root on the default device offered.
8. Select a system root for the new node. The first node uses SYS0. Take the default (SYS10 for the first additional node), or choose your own root numbering scheme. You can choose from SYS1 to SYSn, where n is hexadecimal FFFF.
9. Select the default disk allocation class so that the new node in the cluster uses the same ALLOCLASS as the first node.
10. Confirm whether or not there is a quorum disk.
11. Answer the questions about the sizes of the page file and swap file.
12. When CLUSTER_CONFIG.COM completes, boot the new node from the new system root. For example, for SYSFF on disk DKA200, enter the following command:
BOOT -FL FF,0 DKA200
In the BOOT command, you can use the following flags:
- -FL indicates boot flags.
- FF is the new system root.
- 0 means there are no special boot requirements, such as conversational boot.
You can run the CLUSTER_CONFIG.COM procedure to set up an additional
node in a SCSI cluster, as shown in Example A-2.
Example A-2 Adding a Node to a SCSI Cluster
$ @SYS$MANAGER:CLUSTER_CONFIG
Cluster Configuration Procedure
Use CLUSTER_CONFIG.COM to set up or change an OpenVMS Cluster configuration.
To ensure that you have the required privileges, invoke this procedure
from the system manager's account.
Enter ? for help at any prompt.
1. ADD a node to a cluster.
2. REMOVE a node from the cluster.
3. CHANGE a cluster member's characteristics.
4. CREATE a duplicate system disk for CLU21.
5. EXIT from this procedure.
Enter choice [1]:
The ADD function adds a new node to a cluster.
If the node being added is a voting member, EXPECTED_VOTES in
every cluster member's MODPARAMS.DAT must be adjusted, and the
cluster must be rebooted.
WARNING - If this cluster is running with multiple system disks and
if common system files will be used, please, do not
proceed unless you have defined appropriate logical
names for cluster common files in SYLOGICALS.COM.
For instructions, refer to the OpenVMS Cluster Systems
manual.
Do you want to continue [N]? y
If the new node is a satellite, the network databases on CLU21 are
updated. The network databases on all other cluster members must be
updated.
For instructions, refer to the OpenVMS Cluster Systems manual.
What is the node's DECnet node name? SATURN
What is the node's DECnet node address? 7.77
Is SATURN to be a clustered node with a shared SCSI bus (Y/N)? y
Will SATURN be a satellite [Y]? N
Will SATURN be a boot server [Y]?
This procedure will now ask you for the device name of SATURN's system root.
The default device name (DISK$BIG_X5T5:) is the logical volume name of
SYS$SYSDEVICE:.
What is the device name for SATURN's system root [DISK$BIG_X5T5:]?
What is the name of SATURN's system root [SYS10]? SYS2
Creating directory tree SYS2 ...
System root SYS2 created
NOTE:
All nodes on the same SCSI bus must be members of the same cluster
and must all have the same non-zero disk allocation class or each
will have a different name for the same disk and data corruption
will result.
Enter a value for SATURN's ALLOCLASS parameter [7]:
Does this cluster contain a quorum disk [N]?
Updating network database...
Size of pagefile for SATURN [10000 blocks]?
.
.
.
A.7.2 Error Reports and OPCOM Messages in Multihost SCSI Environments
Certain common operations, such as booting or shutting down a host on a
multihost SCSI bus, can cause other hosts on the SCSI bus to experience
errors. In addition, certain errors that are unusual in a single-host
SCSI configuration may occur more frequently on a multihost SCSI bus.
These errors are transient errors that OpenVMS detects, reports, and
recovers from without losing data or affecting applications that are
running. This section describes the conditions that generate these
errors and the messages that are displayed on the operator console and
entered into the error log.