Managing a DECdfs for OpenVMS client involves coordinating the values of certain interrelated parameters on your system and then mounting DECdfs access points, which creates the client devices on your system.
This chapter describes how to set system parameters, mount access points, display client device information, use the client devices, perform checksum comparisons on DECdfs connections, print files from a client device, use the OpenVMS Backup Utility with a client device, use a cluster as a DECdfs client, and stop and start DECdfs.
Most of these tasks involve the use of DFS$CONTROL commands and qualifiers. For complete information on a specific command, see Chapter 4. For an overall perspective on DECdfs, read Chapter 2, even if you manage a client-only node. Certain topics covered in Chapter 2 affect both the client and server.
After you read this chapter, set the necessary system and network parameters and edit the DFS$CONFIG.COM and DFS$SYSTARTUP.COM files. You can then start DECdfs on your system by executing the DFS$STARTUP.COM file.
A major difference between the server and the client is that the server runs in its own process on your system, whereas no explicit client process exists; the client resides in the DFSC device driver. Managing a client therefore means managing the client devices.
3.1 Setting System Parameters
Running DECdfs on a client system may require that you adjust the SYSGEN parameter NPAGEDYN. Adjust this parameter before installation, as described in the DECdfs for OpenVMS Installation Guide.
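You can inspect the current setting with the SYSGEN utility before deciding whether an adjustment is needed; the following is a general OpenVMS sketch, and the appropriate value for your system is given in the installation guide:

$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> SHOW NPAGEDYN
SYSGEN> EXIT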
DECdfs provides excellent performance when your system uses the
default network and RMS parameters. However, you might improve
DECdfs client performance by setting these parameters as
described in Appendix C.
3.2 Mounting Access Points
To mount an access point, use the DFS$CONTROL command MOUNT. You can mount only access points that the server manager has added. How access points are added and mounted is described in Section 2.3.2. For further information on the MOUNT command and its qualifiers, refer to Chapter 4.
To display a list of the available access points, use the SHOW ACCESS_POINT command. To simplify operation, place the MOUNT commands in the DFS$SYSTARTUP command file.
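For example, the following sketch lists the available access points and then mounts one of them; the access point name is the illustrative name used elsewhere in this chapter:

DFS> SHOW ACCESS_POINT
DFS> MOUNT DEC:.LKG.S.DFSDEV.OUTPOS_XX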
The MOUNT command mounts the client device to enable access by all users and jobs on the client system. That is, the DFSC device can be accessed by users other than the one who mounted it. However, access to files on the server is controlled based on the client user making the reference, not the user who mounted the device.
If you use the /SYSTEM or /GROUP qualifier, any associated logical name is defined in the system or group logical name table, respectively. Also, any subsequent attempt to mount the same access point fails with the following message:
%MOUNT-VOLALRMNT, another volume of same label already mounted
If neither the /SYSTEM nor the /GROUP qualifier is specified, the MOUNT command allocates a new DFSC unit even if another user already has the same access point mounted.
3.2.1 Assigning Device Unit Numbers
Mounting an access point creates a new client device on your system. DECdfs copies this device from the template device DFSC0:. DECdfs creates DFSC0: at startup, when it loads DFSCDRIVER.EXE, the client driver image. DECdfs then copies the I/O database data structures for each subsequent DFSC device from the template. As you mount access points, OpenVMS sequentially assigns a unit number to each new DFSC device, starting with unit number 1001. The first access point you mount creates DFSC1001:, the second access point creates DFSC1002:, and so on.
The MOUNT command's /DEVICE qualifier allows you to specify the device unit number. If you manage an OpenVMS Cluster system as a DECdfs client, this qualifier lets you ensure that the same access point is mounted with the same device unit number on all cluster members. Otherwise, the default DECdfs numbering could assign different device unit numbers to the same access point on different cluster members.
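For example, a sketch of mounting the access point with an explicitly chosen unit number on each cluster member might look like the following; the access point name, unit number, and logical name are illustrative, and Chapter 4 describes the exact form of the /DEVICE qualifier's value:

DFS> MOUNT DEC:.LKG.S.DFSDEV.OUTPOS_XX /DEVICE=DFSC1010: OPX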
3.2.2 Assigning Logical Names
When you mount an access point, you can use the MOUNT command parameter
local-logical-name to assign a logical name to the DFSC device. Compaq
recommends that you use logical names. Because the order in which DFSC
devices are created can vary, their unit numbers can also vary.
Referring to the devices by consistent logical names simplifies both
management and use.
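For example, assigning the logical name OPX (an arbitrary name used for illustration) lets users refer to the device without knowing its unit number:

DFS> MOUNT DEC:.LKG.S.DFSDEV.OUTPOS_XX OPX
$ DIRECTORY OPX:[000000]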
3.2.3 Specifying Volume Names
The MOUNT command's /VOLUME_NAME qualifier allows you to specify a volume name for the client device. This name identifies the device in the display from the DCL command SHOW DEVICE.
The volume name has a limit of 12 characters. If you do not specify a volume name, the access point name becomes the default volume name if it has 12 or fewer characters. If the access point name has more than 12 characters, the default volume name consists of the first 5 characters of the access point name, 2 periods (..), and the last 5 characters of the access point name.
Specifying a volume name for the client device does not affect the volume name on the actual device at the server.
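For example, under the default rule an access point name of RANGER_SATURN (13 characters) would yield the volume name RANGE..ATURN. To choose a different label, supply it explicitly; the names in the following sketch are illustrative:

DFS> MOUNT DEC:.LKG.S.TANTS.RANGER_SATURN /VOLUME_NAME=SATURN_DISK SATURN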
3.2.4 Enabling Data Checking
Data checking causes the server to verify the integrity of data transferred between the disk and the OpenVMS system on the server. When you mount an access point, you can request a data check on read operations only, on write operations only, or on both read and write operations for the client device. To do so, include the /DATA_CHECK qualifier with the MOUNT command.
Data checking takes place at the server. You can request data checking on the client device whether or not the system manager at the server mounted the actual physical device with data checking enabled. If the physical device has data checking enabled, your request does not cause redundant data checking. If the device does not have data checking enabled, your request causes data checking only on your own client's use of the access point.
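For example, a sketch of requesting data checking on both read and write operations might look like the following; the keyword list is an assumption modeled on the standard DCL MOUNT qualifier, so see Chapter 4 for the exact form that DFS$CONTROL accepts:

DFS> MOUNT DEC:.LKG.S.DFSDEV.OUTPOS_XX /DATA_CHECK=(READ,WRITE) OPX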
For a description of data checking on a disk, see the OpenVMS I/O User's Reference Manual.
3.2.5 Mounting Alternative Access Points
An access point can be served by a cluster as well as by an individual node. If the server is a common-environment cluster, the DECdfs manager can register the cluster alias as the access point's location. This allows any node to process incoming requests for the access point. Consequently, the client has to mount only the cluster device. For more information on OpenVMS Cluster systems, see the OpenVMS Cluster Systems manual. For more information on cluster aliases, see the DECnet for OpenVMS Network Management Utilities manual or the DECnet-Plus for OpenVMS Network Management manual.
If the server manager does not want all nodes with incoming alias
enabled to serve the access point, he or she can add the access point
from more than one node, giving the access point a different,
alternative name on each. The client manager can then choose an access
point name and can also select another name later if problems arise
with the first choice.
3.3 Displaying Client Device Information
The DCL command SHOW DEVICE provides information about the client devices on your system. The device information in the display resembles that for other devices, except that DECdfs does not report the number of free blocks for a client device. The Free Blocks field displays a row of asterisks, as in the following example:
$ SHOW DEVICE DFSC1:

Device                  Device           Error    Volume         Free  Trans Mnt
 Name                   Status           Count     Label        Blocks Count Cnt
DFSC1:                  Mounted              0     HELP          *****     2   1
With the /FULL qualifier, the command displays the number 4294967295 in the Free Blocks field. This number is always the same and does not actually represent a count of free blocks.
The DFS$CONTROL command SHOW CLIENT provides information on a specific client device. For the specified device, the command displays the device status, name of the associated access point, the server node, and number of free blocks. For example:
DFS> SHOW CLIENT SATURN

Client Device SATURN (Translates to _DFSC1001:)
    Status       = Available
    Access Point = DEC:.LKG.S.TANTS.RANGER_SATURN
    Node         = TOOTER
    Free blocks  = 71358
Optionally, you can also request activity counters for the device with the /COUNTERS qualifier. The /ALL qualifier requests both the counters and the information shown above. Table 3-1 lists and explains the client counters. The counters indicate use starting from the time that you created the device by mounting an access point.
Counter | Description |
---|---|
File Operations Performed | The total number of all file (XQP) QIO functions issued to the device. |
Bytes Read | The total number of bytes read from this device by user IO$_READVBLK function codes. |
Bytes Written | The total number of bytes written to this device by user IO$_WRITEVBLK function codes. |
Files Opened | The total number of files that this device has opened. |
Mount Verifications Tried | The total number of times that this device attempted to recover from the unavailability of a server node, a server, the Communication Entity, or the DECnet network. |
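For example, to display the counters together with the standard device information shown earlier:

DFS> SHOW CLIENT SATURN/ALL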
Use these client counters to measure DECdfs use at your system.
Some mount verifications probably will occur routinely. Once you know
the normal frequency of mount verifications, you can monitor the Mount
Verifications Tried counter to track potential DECdfs problems.
For more information about mount verification, see Section 3.4.5.
3.4 Using the Client Device
Using a DECdfs client device differs in a few ways from using a device that is actually local. The following sections explain these differences.
3.4.1 Printing Server-Based Files on a Client
Before you can use the client device from your system, the DECdfs server manager must set up proxy accounts. Each user at your system who accesses files at the server does so through a proxy account or a default account.
Printing operations require special treatment in addition to the usual proxy and default accounts. To print files from the client device, your local SYSTEM account must have proxy access to the server node.
For print access to the server, ask the server manager to implement one
of the suggestions in Section 2.2.4.
3.4.2 User Identification Codes on Server Files
DECdfs Version 1.1 and later versions convert server user identification codes (UICs) to client UICs if the proxy account owns files on the server and if you use the /OWNER, /SECURITY, or /FULL qualifier with the DIRECTORY command. The command then displays the correct alphanumeric file owner for files on a DECdfs device, even when the DECdfs client and server nodes do not have coordinated UICs. However, users might have difficulty performing some operations when UICs are not coordinated on the DECdfs client and server nodes.
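For example, the following DCL command (using the illustrative device logical name and directory that appear later in this chapter) displays the converted file owners:

$ DIRECTORY/OWNER OPX:[JONES]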
For more information about UICs, see Section 2.2.2.1.
3.4.3 Access Control Lists on Server Files
Access control lists (ACLs) are invalid at the DFSC device; you cannot create or view the ACLs on files that reside at the server. If you attempt to manipulate ACLs through DECdfs, you may receive the error codes shown in Table 3-2.
Error Code | Condition |
---|---|
SS$_NOACLSUPPORT | Occurs when you try to explicitly alter the ACL of a file on a DECdfs client device. |
SS$_NONLOCAL | Occurs when you try to open a journaled file for write access or set a file as journaled or not journaled on a DECdfs client device. |
3.4.4 Handling DECdfs Error Conditions
A variety of conditions can arise on the client, on the server, or on the network that affect the outcome of DECdfs operations. When an operation is initiated by a DFS$CONTROL command, DECdfs can diagnose and report any exception conditions by using the messages listed in Appendix A. When operations are initiated by general system services, however, the full set of DECdfs condition codes is not available, and a less specific, general message may be reported.
For example, if a MOUNT command identifies an access point that is not currently available from its usual server, DECdfs reports the condition as follows:
DFS> MOUNT DEC:.LKG.S.DFSDEV.OUTPOS_XX /NODE=OUTPOS OPX
%MOUNT-MOUNTED, DEC:.LKG.S.DFSDEV.OUTPOS_XX mounted on _DFSC1003:
%DFS-W-NOTSERVED, Access point is not presently being served
However, if the same condition is present when a general file access is made, only a general message is reported:
$ DIR OPX:[JONES]
%DIRECT-OPENIN, error opening OPX:[JONES]*.*;* as input
-RMS-DNF, directory not found
-SYSTEM-INCVOLLABEL, incorrect volume label
The most common of these messages are shown in Table 3-3.
Additionally, you can determine the current status of a DFSC device by using the SHOW CLIENT command in DFS$CONTROL, for example:
DFS> SHOW CLIENT OPX

Client Device OPX (Translates to _DFSC1004:)
    Status       = Available
    Access Point = DEC:.LKG.S.DFSDEV.OUTPOS_XX
    Node         = OUTPOS
    Free blocks  = -1
    Access point is not presently being served
The last line of output gives the specific DECdfs status of the access
point, including any conditions that may make it inaccessible.
3.4.5 DECdfs Mount Verification
When a disk becomes unavailable on an OpenVMS system, the OpenVMS operating system performs mount verification. Mount verification is the process by which the OpenVMS operating system repeatedly attempts to recover from a disk failure or stoppage and to reestablish use of the disk. Similarly, when the client cannot satisfy certain user requests, it performs mount verification to recover from the failure and reestablish DECdfs service.
The client performs mount verification and retries a user request to open a file or search a directory if the request fails because the server node, the server, the Communication Entity, or the DECnet network is unavailable.
If I/O operations within open files fail for these reasons, DECdfs does not attempt mount verification. Instead, you must close and then reopen any open files. Any operation except CLOSE returns an SS$_ABORT error code. Even if opening a new file restores the link, you cannot use the old file without reopening it.
During the verification process, the client device repeatedly attempts the mount for a short time. If the mount succeeds during that time, mount verification succeeds. A successful mount verification, therefore, means that the original user request succeeds, perhaps with just a delay in response time. If the mount does not succeed during that time, mount verification times out and fails. For example, suppose the manager at the server enters the DFS$CONTROL STOP SERVER command but follows immediately with the START SERVER command. While the server is stopped, client requests fail and mount verification begins. When the server restarts and access points are added again, mount verification succeeds.
Canceling the user operation that triggered mount verification also cancels mount verification. For example, if mount verification starts in response to a DIRECTORY command, and the user presses Ctrl/Y, mount verification stops.
During mount verification, the client sends network class messages to OPCOM, starting with the second try. These messages explain the cause of the verification and describe its state. The following example, in which mount verification was caused by an unavailable remote server, shows an OPCOM mount verification message for DECdfs:
%%%%%%%%%%%  OPCOM   8-JAN-1999 10:17:11.56  %%%%%%%%%%%
Message from user DFS_CLIENT
DFS server for access point FIN.MYSTRY_DUA1 is not running
DFS client mount verification in progress on device _DFSC1:
The next example reports that an access point was removed at the server:
%%%%%%%%%%%  OPCOM   8-JAN-1999 10:18:53.31  %%%%%%%%%%%
Message from user DFS_CLIENT
DFS client is verifying access point .REDHED.WATSON
DFS client mount verification in progress on device _DFSC2:
If mount verification fails, the application that triggered it receives one of the error codes listed in Table 3-3.
Error Code | Condition |
---|---|
SS$_DEVNOTMOUNT | DECnet or the Communication Entity is unavailable at the client. |
SS$_INCVOLLABEL | The server is running, but the access point is invalid. |
SS$_INVLOGIN | The Communication Entity is unavailable at the server. |
SS$_NOLISTENER | The server is not running. |
SS$_UNREACHABLE | DECnet is unavailable at the server. |
DECdfs supports partially mounted devices so that you enter a MOUNT command only once for a client device, even if DECdfs does not complete the mount because the server is unavailable.
While the device is partially mounted, client requests trigger mount
verification. After the server becomes available, the next mount
verification succeeds, which completes the mount operation and the
client request.
3.5 Performing Checksum Comparisons on DECdfs Connections
DECdfs can provide a layer of data integrity above the DECnet level by performing checksum comparisons. To request or stop checksumming, use the DFS$CONTROL command SET COMMUNICATION/[NO]CHECKSUM.
DECdfs checksum comparisons ensure the integrity of the DECnet link. Whenever DECdfs finds a checksum error, it determines that the DECnet link is unreliable and disconnects the logical link. You can enable and disable checksumming only from a client system; the actual checksum comparison occurs at both the client and server. DECdfs reports a checksum error to the node that detects the checksum error and the node that sent the faulty packet.
When you install DECdfs, checksumming is disabled by default.
If your network is prone to errors, you should enable the DECdfs checksum option by changing the command in SYS$MANAGER:DFS$CONFIG.COM to SET COMMUNICATION/CHECKSUM. Then monitor OPCOM messages for checksum failures or use the SHOW COMMUNICATION/COUNTER command to check for a nonzero checksum error counter. Whenever you change the network configuration at your site (for example, when you add new network controller boards or Ethernet segments), you can enable checksumming for a short time to check the links again.
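For example, you might enable checksumming for a test period, let normal activity run, check the counters for nonzero checksum errors, and then disable checksumming again:

DFS> SET COMMUNICATION/CHECKSUM
DFS> SHOW COMMUNICATION/COUNTER
DFS> SET COMMUNICATION/NOCHECKSUM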
Both checksum comparisons and data checks (which you request with the
MOUNT/DATA_CHECK command) test data integrity, but they are very
different. A checksum comparison ensures the integrity of data
traveling between the server and client. A data check ensures the
integrity of data between the disk and the OpenVMS system on the server.
3.6 Printing Files from a Client Device
The MOUNT command entered at the client must include the /SYSTEM qualifier to ensure that the DECdfs device is available systemwide on the client.
If the client is a cluster, the MOUNT command entered at the client must also include the /DEVICE qualifier. This ensures that all nodes in the cluster use the same device name to refer to a particular client device.
Using consistent device names on all cluster members is essential for
successful printing functions. Consistent names allow the print
symbiont to find a file regardless of the node at which the print
command is entered. See Section 3.8 for more information about
mounting DECdfs devices in a cluster.
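For example, a sketch of mounting for systemwide access and then printing a file might look like the following; the file specification is hypothetical, and on a cluster you would also include the /DEVICE qualifier as described in Section 3.2.1:

DFS> MOUNT DEC:.LKG.S.DFSDEV.OUTPOS_XX /SYSTEM OPX
$ PRINT OPX:[JONES]REPORT.TXT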
3.7 Using the OpenVMS Backup Utility with a Client Device
You can use the Backup Utility (BACKUP) to back up files to or from a DFSC device. However, because DFSC devices do not support ACLs, BACKUP cannot save or restore ACL information through a DFSC device. Also note that the BACKUP qualifiers /PHYSICAL, /IMAGE, and /FAST cannot be used with DFSC devices.
For more information on the Backup Utility, see the OpenVMS System Management Utilities Reference Manual.
3.8 Using a Cluster as a DECdfs Client
To use a cluster as a DECdfs client, you must become familiar
with the information in the following sections regarding cluster
aliases and submitting print and batch jobs.
3.8.1 Using Cluster Aliases
At a cluster, it is advantageous to use the cluster alias in outgoing communication with DECdfs servers. Using the cluster identification rather than the individual node identification simplifies management by allowing the server manager to set up proxy accounts according to the cluster alias. This ensures that the user has access to the server from any node in the client cluster. To ensure that DECdfs uses the cluster alias, perform the following steps:
1. Define and enable the cluster alias. On a node running DECnet for OpenVMS (Phase IV), use NCP:

   NCP> DEFINE EXECUTOR ALIAS NODE cluster-alias-name

   On a node running DECnet-Plus, use NCL:

   NCL> CREATE [NODE node-id] ALIAS
   NCL> CREATE [NODE node-id] ALIAS PORT port-name NODE ID
   NCL> SET [NODE node-id] ALIAS PORT port-name SELECTION WEIGHT integer
   NCL> ENABLE NODE ALIAS PORT port-name

2. Enable outgoing use of the alias by the DECdfs Communication Entity. On a DECnet for OpenVMS (Phase IV) node, enter:

   $ MCR NCP SET OBJECT DFS$COM_ACP ALIAS OUTGOING ENABLED

   On a DECnet-Plus node, enter:

   $ MCR NCL CREATE [NODE node-id] SESSION CONTROL APPLICATION DFS$COM_ACP
   $ MCR NCL SET [NODE node-id] SESSION CONTROL APPLICATION DFS$COM_ACP OUTGOING ALIAS boolean
Outgoing requests from your client's Communication Entity then contain
the cluster name instead of the individual node name.
3.8.2 Submitting Print and Batch Jobs
In a DECdfs client cluster, you can submit print and batch jobs on any cluster member's queues if the DECdfs device was mounted with the /SYSTEM qualifier and with the same device unit number (using the /DEVICE qualifier) on all cluster members, as described in Section 3.6.
3.9 Stopping and Starting DECdfs
It may become necessary to stop DECdfs on your system; for example, if security is compromised and you need to stop all file access immediately.
Before you stop DECdfs, notify users of your intentions. You can determine whether users are active on a DECdfs client by entering the SHOW COMMUNICATION/CURRENT command and looking for active outbound connections. This procedure does not identify users by name, but you can use the DCL REPLY/ALL command to notify all users on each client.
To stop DECdfs on your system without aborting user file access, enter the DFS$CONTROL command SHUTDOWN COMMUNICATION. This allows existing communication sessions to complete but refuses new requests.
To stop DECdfs operations immediately, use the STOP COMMUNICATION command. Use this command with caution; it immediately aborts current user file operations and stops the Communication Entity and client.
To start DECdfs on your system, run the startup command file SYS$STARTUP:DFS$STARTUP.COM.
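For example, an orderly shutdown followed later by a restart might look like the following sketch; the REPLY/ALL message text is illustrative:

DFS> SHOW COMMUNICATION/CURRENT
$ REPLY/ALL "DECdfs will shut down in five minutes"
DFS> SHUTDOWN COMMUNICATION
$ @SYS$STARTUP:DFS$STARTUP.COM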
Ensure that DECnet is running before you restart DECdfs. Restarting DECnet or restarting the Communication Entity does not restart DECdfs; you must explicitly execute the DECdfs startup command file.