
OpenVMS Cluster Systems




Chapter 7
Setting Up and Managing Cluster Queues

This chapter discusses queuing topics specific to OpenVMS Cluster systems. Because queues in an OpenVMS Cluster system are established and controlled with the same commands used to manage queues on a standalone computer, the discussions in this chapter assume some knowledge of queue management on a standalone system, as described in the OpenVMS System Manager's Manual.

Note: See the OpenVMS System Manager's Manual for information about queuing compatibility.

7.1 Introduction

Users can submit jobs to any queue in the OpenVMS Cluster system, regardless of the processor on which the job will actually execute. Generic queues can balance the work load among the available processors.

The system manager can use one or several queue managers to manage batch and print queues for an entire OpenVMS Cluster system. Although a single queue manager is sufficient for most systems, multiple queue managers can be useful for distributing the batch and print work load across nodes in the cluster.

Note: OpenVMS Cluster systems that include both VAX and Alpha computers must use the queue manager described in this chapter.

7.2 Controlling Queue Availability

Once the batch and print queue characteristics are set up, the system manager can rely on the distributed queue manager to make queues available across the cluster.

The distributed queue manager prevents the queuing system from being affected when a node enters or leaves the cluster during cluster transitions. The distributed queue manager works as follows:

  • When the node on which the queue manager is running leaves the OpenVMS Cluster system, the queue manager automatically fails over to another node. This failover occurs transparently to users on the system.
  • When nodes are added to the cluster, the queue manager automatically serves the new nodes. The system manager does not need to enter a command explicitly to start queuing on a new node.
  • When the OpenVMS Cluster system reboots, the queuing system automatically restarts by default, so you do not have to include commands for queuing in your startup command procedure. The operating system restores the queuing system with the parameters defined in the queue database, because the characteristics you define when you start the queuing system are stored in that database.

To control queues, the queue manager maintains a clusterwide queue database that stores information about queues and jobs. Whether you use one or several queue managers, only one queue database is shared across the cluster. Keeping the information for all queues and jobs in one database allows jobs submitted from any computer to execute on any queue (provided that the necessary mass storage devices are accessible).
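To verify which node is currently running a queue manager and where its database files reside, you can enter the SHOW QUEUE/MANAGERS/FULL command (the exact output varies by system and version):


$ SHOW QUEUE/MANAGERS/FULL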

7.3 Starting a Queue Manager and Creating the Queue Database

You start up a queue manager using the START/QUEUE/MANAGER command as you would on a standalone computer. However, in an OpenVMS Cluster system, you can also provide a failover list and a unique name for the queue manager. The /NEW_VERSION qualifier creates a new queue database.

The following command example shows how to start a queue manager:


$ START/QUEUE/MANAGER/NEW_VERSION/ON=(GEM,STONE,*)

The following table explains the components of this sample command.

Command Function
START/QUEUE/MANAGER Creates a single, clusterwide queue manager named SYS$QUEUE_MANAGER.
/NEW_VERSION Creates a new queue database in SYS$COMMON:[SYSEXE] that consists of the following three files:
  • QMAN$MASTER.DAT (master file)
  • SYS$QUEUE_MANAGER.QMAN$QUEUES (queue file)
  • SYS$QUEUE_MANAGER.QMAN$JOURNAL (journal file)

Rule: Use the /NEW_VERSION qualifier only on the first invocation of the queue manager or if you want to create a new queue database.

/ON= (node-list)
[optional]
Specifies an ordered list of nodes that can claim the queue manager if the node running the queue manager should exit the cluster. In the example:
  • The queue manager process starts on node GEM.
  • If the queue manager is running on node GEM and GEM leaves the cluster, the queue manager fails over to node STONE.
  • The asterisk wildcard (*) is specified as the last node in the node list to indicate that any remaining, unlisted nodes can start the queue manager in any order.

    Rules: Complete node names are required; you cannot specify the asterisk wildcard character as part of a node name.

    If you want to exclude certain nodes from being eligible to run the queue manager, do not use the asterisk wildcard character in the node list.

/NAME_OF_MANAGER
[optional]
Allows you to assign a unique name to the queue manager. Unique queue manager names are necessary if you run multiple queue managers. The /NAME_OF_MANAGER qualifier causes the queue and journal files to be created with the queue manager name instead of the default name SYS$QUEUE_MANAGER. For example, adding the /NAME_OF_MANAGER=PRINT_MANAGER qualifier creates these files:
QMAN$MASTER.DAT
PRINT_MANAGER.QMAN$QUEUES
PRINT_MANAGER.QMAN$JOURNAL
Rules for OpenVMS Cluster systems with multiple system disks:
  • Specify the locations of both the master file and the queue and journal files for systems that do not boot from the system disk where the files are located.

    Reference: If you want to locate the queue database files on other devices or directories, refer to the OpenVMS System Manager's Manual for instructions.

  • Specify a device and directory that is accessible across the OpenVMS Cluster.
  • Define the device and directory identically in the SYS$COMMON:SYLOGICALS.COM startup command procedure on every node.
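For illustration only (the device, directory, and logical names are hypothetical), the SYLOGICALS.COM procedure on every node might contain a definition similar to the following, using the QMAN$MASTER logical name to locate the master file:


$ DEFINE/SYSTEM/EXECUTIVE QMAN$MASTER DISK$QUEUES:[QMAN]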

7.4 Starting Additional Queue Managers

Running multiple queue managers balances the work load by distributing batch and print jobs across the cluster. For example, you might create separate queue managers for batch and print queues in clusters with CPU or memory shortages. This allows the batch queue manager to run on one node while the print queue manager runs on a different node.

7.4.1 Command Format

To start additional queue managers, include the /ADD and /NAME_OF_MANAGER qualifiers on the START/QUEUE/MANAGER command. Do not specify the /NEW_VERSION qualifier. For example:


$ START/QUEUE/MANAGER/ADD/NAME_OF_MANAGER=BATCH_MANAGER
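Although not shown above, a failover list can also be supplied for an additional queue manager with the /ON qualifier; a sketch reusing the node names from Section 7.3 for illustration:


$ START/QUEUE/MANAGER/ADD/NAME_OF_MANAGER=PRINT_MANAGER/ON=(STONE,GEM,*)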

7.4.2 Database Files

Multiple queue managers share one QMAN$MASTER.DAT master file, but an additional queue file and journal file are created for each queue manager. The additional files are named in the following formats, respectively:

  • name_of_manager.QMAN$QUEUES
  • name_of_manager.QMAN$JOURNAL

By default, the queue database and its files are located in SYS$COMMON:[SYSEXE]. If you want to relocate the queue database files, refer to the instructions in Section 7.6.

7.5 Stopping the Queuing System

The STOP/QUEUE/MANAGER/CLUSTER command stops the queue manager. The queue manager remains stopped, and requests for queuing are denied, until you enter the START/QUEUE/MANAGER command (without the /NEW_VERSION qualifier).

The following command shows how to stop a queue manager named PRINT_MANAGER:


$ STOP/QUEUE/MANAGER/CLUSTER/NAME_OF_MANAGER=PRINT_MANAGER

Rule: You must include the /CLUSTER qualifier on the command line whether or not the queue manager is running on an OpenVMS Cluster system. If you omit the /CLUSTER qualifier, the command stops all queues on the default node without stopping the queue manager. (This has the same effect as entering the STOP/QUEUE/ON_NODE command.)
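To resume queuing afterward, restart the queue manager without the /NEW_VERSION qualifier so that the existing queue database is reused. For example:


$ START/QUEUE/MANAGER/NAME_OF_MANAGER=PRINT_MANAGER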

7.6 Moving Queue Database Files

The files in the queue database can be relocated from the default location of SYS$COMMON:[SYSEXE] to any disk that is mounted clusterwide or that is accessible to the computers participating in the clusterwide queue scheme. For example, you can enhance system performance by locating the database on a shared disk that has a low level of activity.

7.6.1 Location Guidelines

The master file QMAN$MASTER.DAT can be in a location separate from the queue and journal files, but the queue and journal files must be kept together in the same directory. The queue and journal files for one queue manager can be separate from those of other queue managers.

The directory you specify must be available to all nodes in the cluster. If the directory specification is a concealed logical name, it must be defined identically in the SYS$COMMON:SYLOGICALS.COM startup command procedure on every node in the cluster.
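For illustration only (the device, directory, and logical names are hypothetical), the following commands define a concealed logical name, point the QMAN$MASTER logical name at the master file location, and start a queue manager with its queue and journal files in the specified directory:


$ DEFINE/SYSTEM/EXECUTIVE/TRANSLATION_ATTRIBUTES=CONCEALED QUEUE_DISK $1$DGA5:[QUEUES.]
$ DEFINE/SYSTEM/EXECUTIVE QMAN$MASTER QUEUE_DISK:[QMAN]
$ START/QUEUE/MANAGER/NEW_VERSION QUEUE_DISK:[QMAN]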

Reference: The OpenVMS System Manager's Manual contains complete information about creating or relocating the queue database files. See also Section 7.12 for a sample common procedure that sets up an OpenVMS Cluster batch and print system.

7.7 Setting Up Print Queues

To establish print queues, you must determine the type of queue configuration that best suits your OpenVMS Cluster system. You have several alternatives that depend on the number and type of print devices you have on each computer and on how you want print jobs to be processed. For example, you need to decide:

  • Which print queues you want to establish on each computer
  • Whether to set up any clusterwide generic queues to distribute print job processing across the cluster
  • Whether to set up autostart queues for availability or improved startup time

Once you determine the appropriate strategy for your cluster, you can create your queues. Figure 7-1 shows the printer configuration for a cluster consisting of the active computers JUPITR, SATURN, and URANUS.

Figure 7-1 Sample Printer Configuration


7.7.1 Creating a Queue

You set up OpenVMS Cluster print queues using the same method that you would use for a standalone computer. However, in an OpenVMS Cluster system, you must provide a unique name for each queue you create.

7.7.2 Command Format

You create and name a print queue by specifying the INITIALIZE/QUEUE command at the DCL prompt in the following format:


INITIALIZE/QUEUE/ON=node-name::device[/START][/NAME_OF_MANAGER=name-of-manager] queue-name

Qualifier Description
/ON Specifies the computer and printer to which the queue is assigned.
/START Starts the queue.
/NAME_OF_MANAGER Specifies the queue manager that controls the queue; include this qualifier if you are running multiple queue managers.

7.7.3 Ensuring Queue Availability

You can also use the autostart feature to simplify startup and ensure high availability of execution queues in an OpenVMS Cluster. If the node on which the autostart queue is running leaves the OpenVMS Cluster, the queue automatically fails over to the next available node on which autostart is enabled. Autostart is particularly useful on LAT queues. Because LAT printers are usually shared among users of multiple systems or in OpenVMS Cluster systems, many users are affected if a LAT queue is unavailable.

Format for creating autostart queues:

Create an autostart queue with a list of nodes on which the queue can run by specifying the DCL command INITIALIZE/QUEUE in the following format:


INITIALIZE/QUEUE/AUTOSTART_ON=(node-name::device:,node-name::device:,...) queue-name

When you use the /AUTOSTART_ON qualifier, you must initially activate the queue for autostart, either by specifying the /START qualifier with the INITIALIZE/QUEUE command or by entering a START/QUEUE command. However, the queue cannot begin processing jobs until the ENABLE AUTOSTART/QUEUES command is entered for a node on which the queue can run.

Rules: Generic queues cannot be autostart queues. Note that you cannot specify both /ON and /AUTOSTART_ON.

Reference: Refer to Section 7.13 for information about setting the time at which autostart is disabled.
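For illustration (the LAT device name LTA101: and the queue name LAT_PRINT are hypothetical), the following commands initialize an autostart queue that can run on either JUPITR or SATURN, and then enable autostart on the local node so the queue can begin processing jobs:


$ INITIALIZE/QUEUE/AUTOSTART_ON=(JUPITR::LTA101:,SATURN::LTA101:)/START LAT_PRINT
$ ENABLE AUTOSTART/QUEUES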

7.7.4 Examples

The following commands make the local print queue assignments for JUPITR shown in Figure 7-2 and start the queues:


$ INITIALIZE/QUEUE/ON=JUPITR::LPA0/START/NAME_OF_MANAGER=PRINT_MANAGER JUPITR_LPA0
$ INITIALIZE/QUEUE/ON=JUPITR::LPB0/START/NAME_OF_MANAGER=PRINT_MANAGER JUPITR_LPB0

Figure 7-2 Print Queue Configuration


7.8 Setting Up Clusterwide Generic Print Queues

The clusterwide queue database enables you to establish generic queues that function throughout the cluster. Jobs queued to clusterwide generic queues are placed in any assigned print queue that is available, regardless of its location in the cluster. However, the file queued for printing must be accessible to the computer to which the printer is connected.

7.8.1 Sample Configuration

Figure 7-3 illustrates a clusterwide generic print queue in which the queues for all LPA0 printers in the cluster are assigned to a clusterwide generic queue named SYS$PRINT.

A clusterwide generic print queue needs to be initialized and started only once. The most efficient way to start your queues is to create a common command procedure that is executed by each OpenVMS Cluster computer (see Section 7.12.3).

Figure 7-3 Clusterwide Generic Print Queue Configuration


7.8.2 Command Example

The following command initializes and starts the clusterwide generic queue SYS$PRINT:


$ INITIALIZE/QUEUE/GENERIC=(JUPITR_LPA0,SATURN_LPA0,URANUS_LPA0)/START SYS$PRINT

Jobs queued to SYS$PRINT are placed in whichever assigned print queue is available. Thus, in this example, a print job from JUPITR that is queued to SYS$PRINT can be queued to JUPITR_LPA0, SATURN_LPA0, or URANUS_LPA0.
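For example, a user on any cluster member could queue a file (the file name REPORT.TXT is illustrative) without naming a specific printer:


$ PRINT/QUEUE=SYS$PRINT REPORT.TXT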

7.9 Setting Up Execution Batch Queues

Generally, you set up execution batch queues on each OpenVMS Cluster computer using the same procedures you use for a standalone computer. For more detailed information about how to do this, see the OpenVMS System Manager's Manual.

7.9.1 Before You Begin

Before you establish batch queues, you should decide which type of queue configuration best suits your cluster. As system manager, you are responsible for setting up batch queues to maintain efficient batch job processing on the cluster. For example, you should do the following:

  • Determine what type of processing will be performed on each computer.
  • Set up local batch queues that conform to these processing needs.
  • Decide whether to set up any clusterwide generic queues that will distribute batch job processing across the cluster.
  • Decide whether to use autostart queues for startup simplicity.

Once you determine the strategy that best suits your needs, you can create a command procedure to set up your queues. Figure 7-4 shows a batch queue configuration for a cluster consisting of computers JUPITR, SATURN, and URANUS.

Figure 7-4 Sample Batch Queue Configuration


7.9.2 Batch Command Format

You create a batch queue with a unique name by specifying the DCL command INITIALIZE/QUEUE/BATCH in the following format:

INITIALIZE/QUEUE/BATCH/ON=node::[/START][/NAME_OF_MANAGER=name-of-manager] queue-name

Qualifier Description
/ON Specifies the computer on which the batch queue runs.
/START Starts the queue.
/NAME_OF_MANAGER Specifies the name of the queue manager if you are running multiple queue managers.

7.9.3 Autostart Command Format

You can initialize and start an autostart batch queue by specifying the DCL command INITIALIZE/QUEUE/BATCH. Use the following command format:


INITIALIZE/QUEUE/BATCH/AUTOSTART_ON=node:: queue-name

When you use the /AUTOSTART_ON qualifier, you must initially activate the queue for autostart, either by specifying the /START qualifier with the INITIALIZE/QUEUE command or by entering a START/QUEUE command. However, the queue cannot begin processing jobs until the ENABLE AUTOSTART/QUEUES command is entered for a node on which the queue can run.

Rule: Generic queues cannot be autostart queues. Note that you cannot specify both /ON and /AUTOSTART_ON.
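As an alternative to the /ON form shown in Section 7.9.4, a minimal sketch (queue and node names follow the examples in this chapter) that initializes an autostart batch queue and then enables autostart on node JUPITR:


$ INITIALIZE/QUEUE/BATCH/AUTOSTART_ON=JUPITR::/START JUPITR_BATCH
$ ENABLE AUTOSTART/QUEUES/ON_NODE=JUPITR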

7.9.4 Examples

The following commands make the local batch queue assignments for JUPITR, SATURN, and URANUS shown in Figure 7-4:


$ INITIALIZE/QUEUE/BATCH/ON=JUPITR::/START/NAME_OF_MANAGER=BATCH_QUEUE JUPITR_BATCH
$ INITIALIZE/QUEUE/BATCH/ON=SATURN::/START/NAME_OF_MANAGER=BATCH_QUEUE SATURN_BATCH
$ INITIALIZE/QUEUE/BATCH/ON=URANUS::/START/NAME_OF_MANAGER=BATCH_QUEUE URANUS_BATCH

Because batch jobs on each OpenVMS Cluster computer are queued to SYS$BATCH by default, you should consider defining a logical name to establish this queue as a clusterwide generic batch queue that distributes batch job processing throughout the cluster (see Example 7-2). Note, however, that you should do this only if you have a common-environment cluster.

7.10 Setting Up Clusterwide Generic Batch Queues

In an OpenVMS Cluster system, you can distribute batch processing among computers to balance the use of processing resources. You can achieve this workload distribution by assigning local batch queues to one or more clusterwide generic batch queues. These generic batch queues control batch processing across the cluster by placing batch jobs in assigned batch queues that are available. You can create a clusterwide generic batch queue as shown in Example 7-2.
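Example 7-2 is not reproduced in this section; as a rough sketch of the idea (queue names taken from the figures in this chapter), a clusterwide generic batch queue could be created and started with a command such as:


$ INITIALIZE/QUEUE/BATCH/GENERIC=(JUPITR_BATCH,SATURN_BATCH,URANUS_BATCH)/START SYS$BATCH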

A clusterwide generic batch queue needs to be initialized and started only once. The most efficient way to perform these operations is to create a common command procedure that is executed by each OpenVMS Cluster computer (see Example 7-2).

7.10.1 Sample Configuration

In Figure 7-5, batch queues from each OpenVMS Cluster computer are assigned to a clusterwide generic batch queue named SYS$BATCH. Users can submit a job to a specific queue (for example, JUPITR_BATCH or SATURN_BATCH), or, if they have no special preference, they can submit it by default to the clusterwide generic queue SYS$BATCH. The generic queue in turn places the job in an available assigned queue in the cluster.
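For example (the command procedure name NIGHTLY.COM is illustrative), a user can target a specific node's queue or accept the default generic queue:


$ SUBMIT/QUEUE=SATURN_BATCH NIGHTLY.COM   ! Runs on SATURN
$ SUBMIT NIGHTLY.COM                      ! Defaults to SYS$BATCH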

If more than one assigned queue is available, the operating system selects the queue that minimizes the ratio of executing jobs to job limit across all assigned queues. For example, a queue running one job with a job limit of 4 (ratio 0.25) is selected over a queue running three jobs with a job limit of 5 (ratio 0.6).

Figure 7-5 Clusterwide Generic Batch Queue Configuration


7.11 Starting Local Batch Queues

Normally, you use local batch execution queues during startup to run batch jobs that start layered products. For this reason, these queues must be started before the ENABLE AUTOSTART command is executed, as shown in the command procedure in Example 7-1.

7.11.1 Startup Command Procedure

Start the local batch execution queue in each node's startup command procedure SYSTARTUP_VMS.COM. If you use a common startup command procedure, add commands similar to the following to your procedure:


$! Submit the layered-product startup job before the queue starts
$  SUBMIT/PRIORITY=255/NOIDENT/NOLOG/QUEUE=node_BATCH LAYERED_PRODUCT.COM
$! Start the local batch execution queue
$  START/QUEUE node_BATCH
$! Make default submissions (SYS$BATCH) go to this node's local queue
$  DEFINE/SYSTEM/EXECUTIVE SYS$BATCH node_BATCH

Submitting the startup command procedure LAYERED_PRODUCT.COM as a high-priority batch job before the queue starts ensures that the job is executed immediately, regardless of the job limit on the queue. If the queue were started before the command procedure was submitted, the queue might reach its job limit by scheduling user batch jobs, and the startup job would have to wait.

