7.7.2 Checking the Firmware Version
You can check the installed firmware version in two ways: from the
console during system initialization, or by using the efiutil utility:
- The firmware version is shown in the booting console message that
is displayed during system initialization, as shown in the following
example:
HP 2 Port 2Gb Fibre Channel Adapter (driver 1.40, firmware 3.03.001)
- The firmware version number is also shown in the display of the
efiutil info command:
fs0:\efi\hp\tools\io_cards\fc2p2g\efiutil info
Fibre Channel Card Efi Utility 1.20 (1/30/2003)
2 Fibre Channel Adapters found:
Adapter Path WWN Driver (Firmware)
A0 Acpi(000222F0,200)/Pci(1|0) 50060B00001CF2DC 1.40 (3.03.001)
A1 Acpi(000222F0,200)/Pci(1|1) 50060B00001CF2DE 1.40 (3.03.001)
7.7.3 Configuring the Boot Device Paths on the FC
To configure booting on a Fibre Channel storage device, HP recommends
that you use the OpenVMS I64 Boot Manager utility (BOOT_OPTIONS.COM)
after completing the installation of HP OpenVMS Version 8.2. Follow
these steps:
- From the OpenVMS Installation Menu, choose Option 7 "Execute
DCL commands and procedures" to access the DCL prompt.
- At the DCL prompt, enter the following command to invoke the
OpenVMS I64 Boot Manager utility:
$$$ @SYS$MANAGER:BOOT_OPTIONS
- When the utility is invoked, the main menu is displayed. To add
your system disk as a boot option, enter 1 at the prompt, as shown in
the following example:
OpenVMS I64 Boot Manager Boot Options List Management Utility
(1) ADD an entry to the Boot Options list
(2) DISPLAY the Boot Options list
(3) REMOVE an entry from the Boot Options list
(4) MOVE the position of an entry in the Boot Options list
(5) VALIDATE boot options and fix them as necessary
(6) Modify Boot Options TIMEOUT setting
(B) Set to operate on the Boot Device Options list
(D) Set to operate on the Dump Device Options list
(G) Set to operate on the Debug Device Options list
(E) EXIT from the Boot Manager utility
You can also enter Ctrl-Y at any time to abort this utility
Enter your choice: 1
Note
While using this utility, you can change a response made to an earlier
prompt by typing the "^" character as many times as needed.
To abort and return to the DCL prompt, enter Ctrl/Y.
- The utility prompts you for the device name. Enter the system disk
device you are using for this installation, as in the following example
where the device is a multipath Fibre Channel device $1$DGA1: (press
Return):
Enter the device name (enter "?" for a list of devices): $1$DGA1:
- The utility prompts you for the position you want your entry to
take in the EFI boot option list. Enter 1 as in the following example:
Enter the desired position number (1,2,3,,,) of the entry.
To display the Boot Options list, enter "?" and press Return.
Position [1]: 1
- The utility prompts you for OpenVMS boot flags. By default, no
flags are set. Enter the OpenVMS flags (for example, 0,1) followed by a
Return, or press Return to set no flags as in the following example:
Enter the value for VMS_FLAGS in the form n,n.
VMS_FLAGS [NONE]:
- The utility prompts you for a description to include with your boot
option entry. By default, the device name is used as the description.
You can enter more descriptive information, as in the following example:
Enter a short description (do not include quotation marks).
Description ["$1$DGA1"]: $1$DGA1 OpenVMS V8.2 System
efi$bcfg: $1$dga1 (Boot0001) Option successfully added
efi$bcfg: $1$dga1 (Boot0002) Option successfully added
efi$bcfg: $1$dga1 (Boot0003) Option successfully added
- When you have successfully added your boot option, exit from the
utility by entering E at the prompt.
- Log out from the DCL prompt and shut down the I64 system.
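To confirm that the new entry was recorded, you can rerun the utility
before shutting down and choose option 2 (DISPLAY) from the main menu,
as in the following sketch; the exact listing format depends on the
version of the utility:
$$$ @SYS$MANAGER:BOOT_OPTIONS
Enter your choice: 2
The utility then lists the current EFI boot options, which should
include the $1$DGA1 entry added above.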
For more information on this utility, refer to the HP OpenVMS System Manager's Manual, Volume 1: Essentials.
7.8 Setting Up a Storage Controller for Use with OpenVMS
The following list identifies the HP storage array controllers and the
manuals that provide specific information about configuring them for
use with OpenVMS:
- HSG60/80
HSG80 ACS Solution Software Version 8.6 for Compaq
OpenVMS Installation and Configuration Guide, order number
AA-RH4BD-TE. This manual is available at the following location:
ftp://ftp.compaq.com/pub/products/storageworks/techdoc/raidstorage/AA-RH4BD-TE.pdf
- Enterprise Virtual Array
OpenVMS Kit V2.0 for Enterprise
Virtual Array Installation and Configuration Guide, order number
AA-RR03B-TE. This manual is available at the following location:
ftp://ftp.compaq.com/pub/products/storageworks/techdoc/enterprise/AA-RR03B-TE.pdf
- HP StorageWorks Modular Smart Array 1000
The documentation for
the MSA1000 is available at the following location:
ftp://ftp.compaq.com/pub/products/storageworks/techdoc/msa1000/
- HP StorageWorks XP Disk Array
Product information about HP
StorageWorks XP Arrays is available at the following location:
http://h18006.www1.hp.com/storage/xparrays.html
7.8.1 Setting Up the Device Identifier for the CCL
Defining a unique device identifier for the Command Console LUN (CCL)
of the HSG and HSV is not required by OpenVMS, but it may be required
by some management tools. HP recommends that you always define a
unique device identifier, because this identifier causes a CCL device
to be created that is visible through the SHOW DEVICE command. Although
this device cannot be controlled directly from OpenVMS, you can display
the multiple paths to the storage controller with the SHOW DEVICE/FULL
command and diagnose failed paths, as shown in the following example
for $1$GGA3, where one of the two paths has failed.
Paver> sh dev gg /mul
Device Device Error Current
Name Status Count Paths path
$1$GGA1: Online 0 2/ 2 PGA0.5000-1FE1-0011-AF08
$1$GGA3: Online 0 1/ 2 PGA0.5000-1FE1-0011-B158
$1$GGA4: Online 0 2/ 2 PGA0.5000-1FE1-0015-2C58
$1$GGA5: Online 0 2/ 2 PGA0.5000-1FE1-0015-22A8
$1$GGA6: Online 0 2/ 2 PGA0.5000-1FE1-0015-2D18
$1$GGA7: Online 0 2/ 2 PGA0.5000-1FE1-0015-2D08
$1$GGA9: Online 0 2/ 2 PGA0.5000-1FE1-0007-04E3
Paver> show dev /full $1$gga9:
Device $1$GGA9:, device type Generic SCSI device, is online, shareable, device
has multiple I/O paths.
Error count 0 Operations completed 0
Owner process "" Owner UIC [SYSTEM]
Owner process ID 00000000 Dev Prot S:RWPL,O:RWPL,G:RWPL,W:RWPL
Reference count 0 Default buffer size 0
WWID 02000008:5000-1FE1-0007-04E0
I/O paths to device 2
Path PGA0.5000-1FE1-0007-04E3 (PAVER), primary path, current path.
Error count 0 Operations completed 0
Path PGA0.5000-1FE1-0007-04E1 (PAVER).
Error count 0 Operations completed 0
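The CCL and its device identifier are defined at the storage
controller, not on OpenVMS. On an HSG, for example, the identifier is
set from the controller CLI with a command along the following lines;
this is a sketch only, so verify the exact syntax in the HSG
documentation listed in Section 7.8:
HSG> SET THIS_CONTROLLER IDENTIFIER = 3
After the identifier is set, the CCL becomes visible on OpenVMS as
$1$GGA3, as in the display above.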
7.8.2 Setting Up the Device Identifier for Disk Devices
The device identifier for disks is appended to the string $1$DGA to
form the complete device name. It is essential that all disks have
unique device identifiers within a cluster. Device identifiers can be
range from 0 to 32767, except that a device identifier of 0 is not
valid on the HSV. Device identifiers greater than 9999 cannot be MSCP served to
other systems.
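On an HSG controller, for example, the device identifier of a disk
unit is assigned from the controller CLI with a command of the
following form; this is a sketch only, so confirm the syntax in the
HSG manual listed in Section 7.8:
HSG> SET D1 IDENTIFIER = 1
With this setting, unit D1 is seen by OpenVMS as $1$DGA1.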
7.9 Creating a Cluster with a Shared FC System Disk
To configure nodes in an OpenVMS Cluster system, you must execute the
CLUSTER_CONFIG.COM (or CLUSTER_CONFIG_LAN.COM) command procedure. (You
can run either the full version, which provides more information about
most prompts, or the brief version.)
For the purposes of CLUSTER_CONFIG, a shared Fibre Channel (FC) bus is
treated like a shared SCSI bus, except that the allocation class
parameters do not apply to FC. The rules for setting node allocation
class and port allocation class values remain in effect when parallel
SCSI storage devices are present in a configuration that includes FC
storage devices.
To configure a new OpenVMS Cluster system, you must first enable
clustering on a single, or standalone, system. Then you can add
additional nodes to the cluster.
Example 7-5 shows how to enable clustering using the brief version of
CLUSTER_CONFIG_LAN.COM on a standalone node called FCNOD1. At the end
of the procedure, FCNOD1 reboots and forms a one-node cluster.
Example 7-6 shows how to run the brief version of
CLUSTER_CONFIG_LAN.COM on FCNOD1 to add a second node, called FCNOD2,
to form a two-node cluster. At the end of the procedure, the cluster is
configured to allow FCNOD2 to boot off the same FC system disk as
FCNOD1.
The following steps are common to both examples:
- Select the default option [1] for ADD.
- Answer Yes when CLUSTER_CONFIG_LAN.COM asks whether there will be a
shared SCSI bus. SCSI in this context refers to FC as well as to
parallel SCSI.
The allocation class parameters are not affected by
the presence of FC.
- Answer No when the procedure asks whether the node will be a
satellite.
Example 7-5 Enabling Clustering on a Standalone FC Node
$ @CLUSTER_CONFIG_LAN BRIEF
Cluster Configuration Procedure
Executing on an Alpha System
DECnet Phase IV is installed on this node.
The LAN, not DECnet, will be used for MOP downline loading.
This Alpha node is not currently a cluster member
MAIN MENU
1. ADD FCNOD1 to existing cluster, or form a new cluster.
2. MAKE a directory structure for a new root on a system disk.
3. DELETE a root from a system disk.
4. EXIT from this procedure.
Enter choice [1]: 1
Is the node to be a clustered node with a shared SCSI or Fibre Channel bus (Y/N)? Y
Note:
Every cluster node must have a direct connection to every other
node in the cluster. Since FCNOD1 will be a clustered node with
a shared SCSI or FC bus, and Memory Channel, CI, and DSSI are not present,
the LAN will be used for cluster communication.
Enter this cluster's group number: 511
Enter this cluster's password:
Re-enter this cluster's password for verification:
Will FCNOD1 be a boot server [Y]? Y
Verifying LAN adapters in LANACP database...
Updating LANACP LAN server process volatile and permanent databases...
Note: The LANACP LAN server process will be used by FCNOD1 for boot
serving satellites. The following LAN devices have been found:
Verifying LAN adapters in LANACP database...
LAN TYPE ADAPTER NAME SERVICE STATUS
======== ============ ==============
Ethernet EWA0 ENABLED
CAUTION: If you do not define port allocation classes later in this
procedure for shared SCSI buses, all nodes sharing a SCSI bus
must have the same non-zero ALLOCLASS value. If multiple
nodes connect to a shared SCSI bus without the same allocation
class for the bus, system booting will halt due to the error or
IO AUTOCONFIGURE after boot will keep the bus offline.
Enter a value for FCNOD1's ALLOCLASS parameter [0]: 5
Does this cluster contain a quorum disk [N]? N
Each shared SCSI bus must have a positive allocation class value. A shared
bus uses a PK adapter. A private bus may use: PK, DR, DV.
When adding a node with SCSI-based cluster communications, the shared
SCSI port allocation classes may be established in SYS$DEVICES.DAT.
Otherwise, the system's disk allocation class will apply.
A private SCSI bus need not have an entry in SYS$DEVICES.DAT. If it has an
entry, its entry may assign any legitimate port allocation class value:
n where n = a positive integer, 1 to 32767 inclusive
0 no port allocation class and disk allocation class does not apply
-1 system's disk allocation class applies (system parameter ALLOCLASS)
When modifying port allocation classes, SYS$DEVICES.DAT must be updated
for all affected nodes, and then all affected nodes must be rebooted.
The following dialog will update SYS$DEVICES.DAT on FCNOD1.
There are currently no entries in SYS$DEVICES.DAT for FCNOD1.
After the next boot, any SCSI controller on FCNOD1 will use
FCNOD1's disk allocation class.
Assign port allocation class to which adapter [RETURN for none]: PKA
Port allocation class for PKA0: 10
Port Alloclass 10 Adapter FCNOD1$PKA
Assign port allocation class to which adapter [RETURN for none]: PKB
Port allocation class for PKB0: 20
Port Alloclass 10 Adapter FCNOD1$PKA
Port Alloclass 20 Adapter FCNOD1$PKB
WARNING: FCNOD1 will be a voting cluster member. EXPECTED_VOTES for
this and every other cluster member should be adjusted at
a convenient time before a reboot. For complete instructions,
check the section on configuring a cluster in the "OpenVMS
Cluster Systems" manual.
Execute AUTOGEN to compute the SYSGEN parameters for your configuration
and reboot FCNOD1 with the new parameters. This is necessary before
FCNOD1 can become a cluster member.
Do you want to run AUTOGEN now [Y]? Y
Running AUTOGEN -- Please wait.
The system is shutting down to allow the system to boot with the
generated site-specific parameters and installed images.
The system will automatically reboot after the shutdown and the
upgrade will be complete.
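For reference, the port allocation classes entered in this dialog are
recorded in SYS$DEVICES.DAT on FCNOD1. The resulting entries would look
approximately as follows; this layout is illustrative and may vary by
OpenVMS version:
[Port FCNOD1$PKA]
Allocation Class = 10
[Port FCNOD1$PKB]
Allocation Class = 20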
Example 7-6 Adding a Node to a Cluster with a Shared FC System Disk
$ @CLUSTER_CONFIG_LAN BRIEF
Cluster Configuration Procedure
Executing on an Alpha System
DECnet Phase IV is installed on this node.
The LAN, not DECnet, will be used for MOP downline loading.
FCNOD1 is an Alpha system and currently a member of a cluster
so the following functions can be performed:
MAIN MENU
1. ADD an Alpha node to the cluster.
2. REMOVE a node from the cluster.
3. CHANGE a cluster member's characteristics.
4. CREATE a duplicate system disk for FCNOD1.
5. MAKE a directory structure for a new root on a system disk.
6. DELETE a root from a system disk.
7. EXIT from this procedure.
Enter choice [1]: 1
This ADD function will add a new Alpha node to the cluster.
WARNING: If the node being added is a voting member, EXPECTED_VOTES for
every cluster member must be adjusted. For complete instructions
check the section on configuring a cluster in the "OpenVMS Cluster
Systems" manual.
CAUTION: If this cluster is running with multiple system disks and
common system files will be used, please, do not proceed
unless appropriate logical names are defined for cluster
common files in SYLOGICALS.COM. For instructions, refer to
the "OpenVMS Cluster Systems" manual.
Is the node to be a clustered node with a shared SCSI or Fibre Channel bus (Y/N)? Y
Will the node be a satellite [Y]? N
What is the node's SCS node name? FCNOD2
What is the node's SCSSYSTEMID number? 19.111
NOTE: 19.111 equates to an SCSSYSTEMID of 19567
Will FCNOD2 be a boot server [Y]? Y
What is the device name for FCNOD2's system root
[default DISK$V72_SSB:]?
What is the name of FCNOD2's system root [SYS10]?
Creating directory tree SYS10 ...
System root SYS10 created
CAUTION: If you do not define port allocation classes later in this
procedure for shared SCSI buses, all nodes sharing a SCSI bus
must have the same non-zero ALLOCLASS value. If multiple
nodes connect to a shared SCSI bus without the same allocation
class for the bus, system booting will halt due to the error or
IO AUTOCONFIGURE after boot will keep the bus offline.
Enter a value for FCNOD2's ALLOCLASS parameter [5]:
Does this cluster contain a quorum disk [N]? N
Size of pagefile for FCNOD2 [RETURN for AUTOGEN sizing]?
A temporary pagefile will be created until resizing by AUTOGEN. The
default size below is arbitrary and may or may not be appropriate.
Size of temporary pagefile [10000]?
Size of swap file for FCNOD2 [RETURN for AUTOGEN sizing]?
A temporary swap file will be created until resizing by AUTOGEN. The
default size below is arbitrary and may or may not be appropriate.
Size of temporary swap file [8000]?
Each shared SCSI bus must have a positive allocation class value. A shared
bus uses a PK adapter. A private bus may use: PK, DR, DV.
When adding a node with SCSI-based cluster communications, the shared
SCSI port allocation classes may be established in SYS$DEVICES.DAT.
Otherwise, the system's disk allocation class will apply.
A private SCSI bus need not have an entry in SYS$DEVICES.DAT. If it has an
entry, its entry may assign any legitimate port allocation class value:
n where n = a positive integer, 1 to 32767 inclusive
0 no port allocation class and disk allocation class does not apply
-1 system's disk allocation class applies (system parameter ALLOCLASS)
When modifying port allocation classes, SYS$DEVICES.DAT must be updated
for all affected nodes, and then all affected nodes must be rebooted.
The following dialog will update SYS$DEVICES.DAT on FCNOD2.
Enter [RETURN] to continue:
$20$DKA400:<VMS$COMMON.SYSEXE>SYS$DEVICES.DAT;1 contains port
allocation classes for FCNOD2. After the next boot, any SCSI
controller not assigned in SYS$DEVICES.DAT will use FCNOD2's
disk allocation class.
Assign port allocation class to which adapter [RETURN for none]: PKA
Port allocation class for PKA0: 11
Port Alloclass 11 Adapter FCNOD2$PKA
Assign port allocation class to which adapter [RETURN for none]: PKB
Port allocation class for PKB0: 20
Port Alloclass 11 Adapter FCNOD2$PKA
Port Alloclass 20 Adapter FCNOD2$PKB
Assign port allocation class to which adapter [RETURN for none]:
WARNING: FCNOD2 must be rebooted to make port allocation class
specifications in SYS$DEVICES.DAT take effect.
Will a disk local only to FCNOD2 (and not accessible at this time to FCNOD1)
be used for paging and swapping (Y/N)? N
If you specify a device other than DISK$V72_SSB: for FCNOD2's
page and swap files, this procedure will create PAGEFILE_FCNOD2.SYS
and SWAPFILE_FCNOD2.SYS in the [SYSEXE] directory on the device you
specify.
What is the device name for the page and swap files [DISK$V72_SSB:]?
%SYSGEN-I-CREATED, $20$DKA400:[SYS10.SYSEXE]PAGEFILE.SYS;1 created
%SYSGEN-I-CREATED, $20$DKA400:[SYS10.SYSEXE]SWAPFILE.SYS;1 created
The configuration procedure has completed successfully.
FCNOD2 has been configured to join the cluster.
The first time FCNOD2 boots, NETCONFIG.COM and
AUTOGEN.COM will run automatically.
7.9.1 Configuring Additional Cluster Nodes to Boot with a Shared FC Disk (I64 Only)
To configure additional nodes to boot with a shared FC disk in an
OpenVMS Cluster system, HP requires that you use the OpenVMS I64
Boot Manager utility (BOOT_OPTIONS.COM).
After you have enabled clustering on a single or standalone system, you
can add additional I64 nodes to boot from a shared FC disk, as follows:
- Boot the HP OpenVMS Version 8.2 Installation Disk on the target
node.
- From the OpenVMS Installation Menu, choose Option 7 "Execute
DCL commands and procedures."
- Follow the instructions in Section 7.7.3. Make sure that you set
the correct system root when asked to enter the OpenVMS boot flags.
Note
The OpenVMS I64 Boot Manager (BOOT_OPTIONS.COM) utility requires the
shared FC disk to be mounted. If the shared FC disk is not mounted
cluster-wide, the utility will try to mount the disk with a /NOWRITE
option. If the shared FC disk is already mounted cluster-wide, user
intervention is required. For more information on this utility, refer
to the HP OpenVMS System Manager's Manual, Volume 1: Essentials.
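For example, you can mount the shared system disk yourself from the
DCL environment of the installation menu before invoking the utility.
The following is a minimal sketch that assumes the system disk is
$1$DGA1: with a hypothetical volume label of V82SYS:
$$$ MOUNT/NOWRITE $1$DGA1: V82SYS
$$$ @SYS$MANAGER:BOOT_OPTIONS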
7.9.2 Online Reconfiguration
The FC interconnect can be reconfigured while the hosts are running
OpenVMS. This includes the ability to:
- Add, move, or remove FC switches and HSGs.
- Add, move, or remove HSG virtual disk units.
- Change the device identifier or LUN value of the HSG virtual disk
units.
- Disconnect and reconnect FC cables. Reconnection can be to the same
or different adapters, switch ports, or HSG ports.
OpenVMS does not automatically detect most FC reconfigurations. You
must use the following procedure to safely perform an FC
reconfiguration, and to ensure that OpenVMS has adjusted its internal
data structures to match the new state:
- Dismount all disks that are involved in the reconfiguration.
- Perform the reconfiguration.
- Enter the following commands on each host that is connected to the
Fibre Channel:
SYSMAN> IO SCSI_PATH_VERIFY
SYSMAN> IO AUTOCONFIGURE
The purpose of the SCSI_PATH_VERIFY command is to check each FC path in
the system's IO database to determine whether the attached device has
been changed. If a device change is detected, then the FC path is
disconnected in the IO database. This allows the path to be
reconfigured for a new device by using the IO AUTOCONFIGURE command.
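Put together, an online reconfiguration might look like the following
sketch, run on each connected host, where $1$DGA3: stands for a
hypothetical disk affected by the change:
$ DISMOUNT/CLUSTER $1$DGA3:
$ ! ...perform the FC or HSG reconfiguration here...
$ RUN SYS$SYSTEM:SYSMAN
SYSMAN> IO SCSI_PATH_VERIFY
SYSMAN> IO AUTOCONFIGURE
SYSMAN> EXIT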
Note
In the current release, the SCSI_PATH_VERIFY command only operates on
FC disk devices. It does not operate on generic FC devices, such as the
HSG command console LUN (CCL). (Generic FC devices have names such as
$1$GGAnnnnn.)
This means that once the CCL of an HSG has been configured by OpenVMS
with a particular device identifier, its device identifier should not
be changed.