HP Data Cartridge Server Component for OpenVMS
Release Notes

January 2005

These release notes contain information concerning DCSC functionality and operation.

Operating System:  OpenVMS I64 Version 8.2
                   OpenVMS Alpha Versions 7.3-2 and 8.2
                   OpenVMS VAX Version 7.3

Software Version:  DCSC for OpenVMS I64 Version 3.3
                   DCSC for OpenVMS Alpha Version 3.3
                   DCSC for OpenVMS VAX Version 3.3

Hewlett-Packard Company
Palo Alto, California

_________________________________________________________________

© Copyright 2005 Hewlett-Packard Development Company, L.P.

Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.

The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

Intel and Itanium are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

This document was prepared using DECdocument, Version 3.3-1b.

_________________________________________________________________

Contents

Preface

1  OpenVMS ECOs Affecting DCSC Operation

2  New and Changed Features Since DCSC V3.2
   2.1   Support for OpenVMS V8.2 and Integrity Servers
   2.2   Support for ACSLS V6.1 and Later
   2.3   Faster CARTRIDGE EJECTs
   2.4   New Logical Name - DCSC$LOCK_WITH_NODENAME
   2.5   DCSC in a Cluster
   2.6   Removed Support for other Library Software
   2.7   Support for Panel 0 Drives
   2.8   Support for Terminal Emulators
   2.9   UCX REMOVED as a Prerequisite Product

3  New and Changed Features Since DCSC V3.1
   3.1   Support for StorageTek TimberWolf 9740, 9730, and 9710 Libraries
   3.2   Improved Tape Initialization with CARTRIDGE MOUNT
   3.3   Improved Tape Unloading with CARTRIDGE DISMOUNT

4  Significant Features Added Since DCSC V2.0
   4.1   Improved Master/Virtual Node Communications
   4.2   Random Request Timeouts/Failures and Unavailable Devices
   4.3   Random Work Process Failure to Wake/Request Timeouts
   4.4   Illogical Events and Abnormal Termination of DCSC
   4.5   Better DCSC/StorageTek VM Server Communications
   4.6   Assignment of Tape Device Ownership to Master Process
   4.7   Improved DCSC Recovery of a Newly ONLINE ACS or LSM
   4.8   Maintaining DCSC Client Lock Id Integrity
   4.9   Content and Maintenance of DCSC Logs and Files
   4.10  Kit Install and Startup Improvements
   4.11  Support DEC TCP/IP Services for OpenVMS
5  Significant Features Added Since DCSC V1.1
   5.1   CARTRIDGE ALLOCATE Features
   5.2   Reduced Stack Usage
   5.3   Support for ACSLS V3.0 and StorageTek "Clipper Door"
   5.4   Support for StorageTek "Library Station" Server
   5.5   Support for the Odetics 3480 Tape Library
   5.6   Problems with >100 Scratch Tapes on UNIX Servers
   5.7   Increased Number of Libraries
   5.8   SUBSYS Failure
   5.9   "NULL DEVICE" Return Fixed
   5.10  Restart Log Files Each Night
   5.11  DCSC$SUPERVISOR Crashes
   5.12  Increased Queue Size
   5.13  Check Entire Configuration for Transport Leveling
   5.14  ACS Internal Error
   5.15  CARTRIDGE ALLOCATE and DEALLOCATE Drive
   5.16  Use of an Input Data File for CART EJECT VOLUME
   5.17  CARTRIDGE {DISCARD, LIST, RESTORE} SAVESET Other Users Savesets
   5.18  Allow Multiple Inputs to CARTRIDGE SAVE
   5.19  /CACHE=TAPE_DATA
   5.20  /MEDIA_FORMAT=COMPACTION
   5.21  CARTRIDGE LIST SAVESET/NOOUTPUT Fixed
   5.22  Increased Command Strings for CARTRIDGE SAVE
   5.23  Additional OpenVMS BACKUP Qualifiers
   5.24  NORMAL Status Added to CARTRIDGE SHOW VOLUME
   5.25  CARTRIDGE START
   5.26  DCSC_RESERVE_DRIVE and DCSC_RELEASE_DRIVE
   5.27  New Logical Names
      5.27.1  DCSC$LOG_CONTROL_nn
      5.27.2  DCSC$SAVESET_LOCK_TIMEOUT
      5.27.3  DCSC$CACHE_TAPE_DATA
      5.27.4  DCSC$MEDIA_FORMAT_COMPACTION
      5.27.5  DCSC$LOCK_BEFORE_EJECT

6  DCSC Upgrade Considerations
   6.1   DCSC License
   6.2   Logical Order of Upgrade
      6.2.1  Shutdown DCSC on All Nodes in the Cluster
      6.2.2  Note changes to SYS$STARTUP:DCSC$STARTUP.COM
      6.2.3  Install DCSC V3.3
      6.2.4  Conversion of DCSC V1.1 Data Files
      6.2.5  Verify your Configuration
      6.2.6  Start DCSC V3.3
      6.2.7  Upgrade Guidelines for DCSC Application Programs

7  DCSC in an OpenVMS (VAX and AXP) Environment
   7.1   DCSC Application Software Compiler Options for AXP
   7.2   DCSC and OpenVMS Device Protection

8  Notes to Published Documentation
   8.1   Configuration of DCSC Master Server Node
   8.2   Viewing DCSC Server Log Files
   8.3   Volume Remount Operations
   8.4   Cold Start
   8.5   StorageTek and DCSC Database Synchronization
   8.6   Multiple Scratch Label Types
   8.7   CARTRIDGE SHOW SAVESET
   8.8   OpenVMS ALLOCATE and CARTRIDGE MOUNT
   8.9   DCSC Authorization/Privilege Levels
   8.10  Using DECNET and TCP/IP Together
   8.11  CARTRIDGE EJECT VOLUME file_name

9  Functional Limitations
   9.1   Support for Various ACSLS Versions
   9.2   StorageTek UNIX-Based Library Server CAP Priority
   9.3   StorageTek VM-Based Library Server Limitations
   9.4   StorageTek UNIX-Based Library Server Limitations
   9.5   SHOW VOLUME Limitations With StorageTek UNIX-Based Library Server
   9.6   StorageTek Library Station Server Limitations
   9.7   CARTRIDGE SAVE and RESTORE Limitations
   9.8   CARTRIDGE MOUNT Limitation and Workaround
   9.9   Multiple ACS Dynamic Device Recovery Limitation
   9.10  Saveset Names
   9.11  Removal of DCSC Users from the System
   9.12  CARTRIDGE ENTER/CARTRIDGE EJECT Timeouts

10  Known Problems
   10.1  128 Drive Maximum Per Library
   10.2  ACS Server Internal Errors
   10.3  TCPIP Virtual Nodes
   10.4  Potential Robotic Server Dismount Problem

_________________________________________________________________

Preface

The HP Data Cartridge Server Component for OpenVMS (DCSC) software provides OpenVMS I64, OpenVMS Alpha, and OpenVMS VAX computer users with access to a StorageTek Automated Cartridge System (ACS). Depending upon the ACS library and tape drive configuration, data storage is provided in 18-trk, 36-trk, SD-3, or DLT media formats.

These release notes supplement information found in the DCSC documentation set. Referenced are OpenVMS Engineering Change Orders that affect DCSC, significant new features, upgrade and migration considerations, modified and corrected documentation notes, software limitations, and known problems with DCSC V3.3.

Audience

This guide is intended for users of both the DCSC DCL interface (CARTRIDGE) and the DCSC application programming interface. Users of the DCSC application programming interface should take special note of the "Upgrade Considerations" and "Migration Considerations" chapters.

Document Structure

This guide is structured as follows:

o  Chapter 1, OpenVMS ECOs Affecting DCSC Operation, provides general information regarding OpenVMS ECOs (Engineering Change Orders) required for successful operation of DCSC.

o  Chapter 2, New and Changed Features Since DCSC V3.2, describes the features that are new or have been changed in HP Data Cartridge Server Component for OpenVMS V3.3 since V3.2.

o  Chapter 3, New and Changed Features Since DCSC V3.1, describes the features that are new or have been changed in HP Data Cartridge Server Component for OpenVMS V3.3 since V3.1.

o  Chapter 4, Significant Features Added Since DCSC V2.0, describes the significant features added to HP Data Cartridge Server Component for OpenVMS since V2.0.

o  Chapter 5, Significant Features Added Since DCSC V1.1, describes the significant features added to HP Data Cartridge Server Component for OpenVMS since V1.1.

o  Chapter 6, DCSC Upgrade Considerations, provides the information necessary to upgrade DCSC from a previous version. System managers and programmers should read this section.

o  Chapter 7, DCSC in an OpenVMS (VAX and AXP) Environment, provides general information helpful in the operation of DCSC in an OpenVMS (both VAX and AXP) environment.
   System managers and programmers should read this section.

o  Chapter 8, Notes to Published Documentation, describes information that has changed with regard to the published DCSC V3.0 documentation set. These items supersede the documentation set.

o  Chapter 9, Functional Limitations, describes functional limitations that users should be aware of. These limitations are the result of a single product supporting the various types of ACS library servers.

o  Chapter 10, Known Problems, contains a list of known problems in DCSC V3.3.

Related Documentation

The following guides, together with this guide, comprise the HP Data Cartridge Server Component for OpenVMS documentation set:

o  Data Cartridge Server Component Installation Guide: describes how to install the HP Data Cartridge Server Component for OpenVMS software on your OpenVMS system.

o  Data Cartridge Server Component System Manager's Guide: describes how to manage a system after the HP Data Cartridge Server Component for OpenVMS is installed. It describes the DCSC environment, how to use the Configuration File Editor to configure the system, and tips for troubleshooting and handling errors.

o  Data Cartridge Server Component Programmer's Reference Guide: provides a reference for programmers who need to use the DCSC Run-Time Library routines to create applications. It also provides an overview of the routines, with coding examples written in the C language.

o  Data Cartridge Server Component User's Guide: summarizes the main functions of DCSC, highlighting the functions performed by the computer operator, such as mounting and dismounting tapes.

1  OpenVMS ECOs Affecting DCSC Operation

Appropriate ECO kits for SCSI drivers on OpenVMS platforms should be installed where the StorageTek ACS library includes SCSI tape drives. Continued successful operation of DCSC requires that all appropriate ECOs be installed as recommended. For further information, Hewlett-Packard customers can contact their normal Hewlett-Packard support channel.

2  New and Changed Features Since DCSC V3.2

DCSC Version V3.3 contains a number of enhancements, some of which relate directly to problems reported since the release of DCSC Version 3.2. Some of these changes were available in the V3.2A release and are included here for completeness.

2.1 Support for OpenVMS V8.2 and Integrity Servers

DCSC Version V3.3 adds support for OpenVMS V8.2, in particular support for OpenVMS I64 running on the Integrity server platform. Customers running in a mixed-architecture cluster should refer to Section 2.5, DCSC in a Cluster, for information pertaining to setting up DCSC.

2.2 Support for ACSLS V6.1 and Later

DCSC Version V3.3 contains changes that allow DCSC to work correctly with StorageTek ACSLS software V6.1, V7.1, and later. Note that DCSC Version V3.3 now provides support for ACSLS "Packet Level 4" communication packets. It is no longer required to reconfigure the StorageTek ACSLS server using the command:

    dv_config -p ACSLS_MIN_VERSION

The default value for this variable may be used. If you have previously set this variable to 2 (to support DCSC Version 3.2A), you may either change it or leave it; it has no effect when using DCSC Version V3.3.
2.3 Faster CARTRIDGE EJECTs

DCSC V3.2A introduced a delay in message handling that caused commands to execute more slowly than in V3.2. This was especially noticeable when multiple volumes were being ejected from the library. This delay has been removed, and commands should execute as they did in DCSC V3.2.

2.4 New Logical Name - DCSC$LOCK_WITH_NODENAME

When performing resource queries on ACSLS, it is often unclear which OpenVMS system is actually using a resource, because DCSC, by default, uses the username "DCSC" for all lock requests from all systems. A new logical name, DCSC$LOCK_WITH_NODENAME, has been created to alleviate this confusion. If this logical is defined as TRUE, then the SCSNODE name will be used instead of "DCSC". This logical should be defined in the file SYS$STARTUP:DCSC$LOCAL_STARTUP.COM.

2.5 DCSC in a Cluster

As noted in the DCSC System Manager's Guide, DCSC was originally designed to operate on one node within a cluster, called the DCSC Master Node. DCSC services can be provided to users and applications on any other node in the cluster (a DCSC Virtual Node) if the library tape drive devices are accessible to the node and the applications have access to the cluster common system disk. However, this configuration creates a single point of failure within the cluster, and it is also difficult to manage across mixed-version and/or mixed-architecture clusters.

Therefore, HP now recommends that you create multiple MASTER nodes within your cluster. To create multiple MASTER nodes, simply install DCSC V3.3 on each of the nodes in the cluster. It is important that each installation use its own private directory and not simply SYS$COMMON:[DCSC$SERVER]. Once installed, you must run CARTRIDGE CONFIGURE on each system, setting up each system as a MASTER node. Unique port IDs and maintenance volumes should be used for each configuration.

2.6 Removed Support for other Library Software

The only currently supported library software is StorageTek's ACSLS. The other library types (such as VM, DECLS, and so on) have been removed. The only server type that is available in the configuration menu is ACSLS.

2.7 Support for Panel 0 Drives

The StorageTek L700 library allows drives to be configured in panel 0. The DCSC Version V3.3 configuration editor now allows you to enter the value 0 as a valid panel number.

2.8 Support for Terminal Emulators

When using the DCSC configuration editor on various terminal emulators on PCs, users have had difficulty finding the "DO" key. DCSC Version V3.3 has enabled the "F10" key as an alternate to the "DO" key. The "F11" key continues to be the same as the "ESC" key.

2.9 UCX REMOVED as a Prerequisite Product

DCSC Version V3.3 no longer verifies the presence of the UCX software during the installation IVP procedure. DCSC Version V3.3 has been validated only with HP TCP/IP for OpenVMS. Other TCP/IP products may work but have not been tested.

3  New and Changed Features Since DCSC V3.1

DCSC Version V3.3 contains a number of enhancements, some of which relate directly to problems reported since the release of DCSC Version 3.1.
3.1 Support for StorageTek TimberWolf 9740, 9730, and 9710 Libraries

DCSC has been modified to support the StorageTek TimberWolf 9740, 9730, and 9710 libraries, some of which may be configured with up to twenty drives per panel. Previous standard versions of DCSC supported only four drives per panel.

3.2 Improved Tape Initialization with CARTRIDGE MOUNT

DCSC has been modified to accommodate the longer delays experienced when attempting to mount or mount/initialize a cartridge in a SCSI tape drive via the CARTRIDGE MOUNT or CARTRIDGE MOUNT/SYSINIT commands, respectively. Specifically, DCSC will wait a maximum of ten minutes (in increments of two seconds, with a sixty-second default) when either device-off-line or medium-off-line is detected while attempting to mount or initialize a tape cartridge. The default time (in seconds) may be changed via a logical as follows:

    $ define/nolog/system/exe DCSC$MEDOFL_TIMER value_in_seconds

Also, DCSC has been modified to report errors encountered when initializing tape cartridges via the /SYSINIT qualifier. Previously, the cartridge could be left in a loaded but not OpenVMS-mounted state without any error indication. Currently, DCSC will report any errors. It is the responsibility of the user application or operator to provide a correct response.

3.3 Improved Tape Unloading with CARTRIDGE DISMOUNT

The DCSC CARTRIDGE facility has been modified to decrease the delay experienced when dismounting and unloading cartridges, especially when SCSI tape drives are being utilized.

4  Significant Features Added Since DCSC V2.0

Following are the significant enhancements and problem resolutions since DCSC Version 2.0. They were originally implemented in DCSC V3.0.

4.1 Improved Master/Virtual Node Communications

Previous versions of DCSC contained logic that, under varying conditions, compromised message communications between master and virtual nodes. Results on virtual nodes such as SUBSYS failures, loss of communications, and "hung" SLS or third-party backup requests have been traced to this logic, which has been corrected in DCSC V3.0.

4.2 Random Request Timeouts/Failures and Unavailable Devices

DCSC V3.0 corrects faulty DCSC Supervisor logic that, under a rare set of circumstances, would cause the deletion of a DCSC work process, thereby terminating the function served by that work process. The effects varied from the timeout or failure of a request to a device or volume becoming unavailable.

4.3 Random Work Process Failure to Wake/Request Timeouts

DCSC V3.0 corrects faulty DCSC Channel Transmit logic that, under a rare set of circumstances, would result in the failure to wake a work process upon receipt of a message response from the StorageTek ACSLS. This would eventually result in a message timeout.

4.4 Illogical Events and Abnormal Termination of DCSC

Previous versions of DCSC would occasionally terminate abnormally due to the occurrence of an "Illogical Event".
This was due to DCSC's improper handling of certain message timeout events and has been corrected in DCSC V3.0.

4.5 Better DCSC/StorageTek VM Server Communications

Previous versions of DCSC would not completely process segmented messages during TCP/IP communications with the StorageTek VM server. This has been corrected in DCSC V3.0.

4.6 Assignment of Tape Device Ownership to Master Process

Device ownership has been the source of numerous misunderstandings and trouble reports and has been addressed in DCSC V3.0.

Previous versions of DCSC normally assigned ownership of a tape device to the process that performed the CARTRIDGE ALLOCATE command (RTL Reserve Drive function) or CARTRIDGE MOUNT command (RTL Mount function) unless that process acted as a surrogate for another process via the /PROCESS=pid switch (node, process_id parameter). Note that a surrogate process was and still is excluded from performing any OpenVMS allocate or mount function.

Following OpenVMS ownership rules on mount/dismount (note that OpenVMS rules on device allocation differ slightly), DCSC ownership of a tape device is now always assigned to the master process of a job. If a device is either allocated or mounted on behalf of another process, then the master process of that other process becomes, in DCSC's eyes, the device owner. Note that if a device is OpenVMS allocated at a level other than the master process level, then OpenVMS and DCSC device ownership will differ.

On versions of OpenVMS prior to V6.1, an ACL error may result if an attempt is made to access an OpenVMS allocated-only (not mounted) device by a process other than the allocating process (even if in the same job). Previous versions of DCSC attempted to mirror that OpenVMS rule. Since OpenVMS V6.1 allows any process within a job to share an allocated device, DCSC has been modified so that it will issue an ACL error only if a process outside the job (owning the allocated device) attempts to access the device.

The great advantage of this modification is that any process of a job may, without having to specify /PROCESS=master_process_id, mount a device previously allocated by any other process in the job and/or dismount a device previously mounted by any other process of the job. Because the DCSC ownership of the device remains with the master process, DCSC will never perform automatic device recovery unless the master process exits.

Note: on versions of OpenVMS prior to V6.1, OpenVMS or CARTRIDGE allocation of a device should ALWAYS be done by the master process of a job in order to allow other processes in that job stream to have OpenVMS access to the device.

4.7 Improved DCSC Recovery of a Newly ONLINE ACS or LSM

DCSC V2.0 attempted to reduce the time it took to detect a newly ONLINE ACS or LSM. It did this by increasing the polling rate from every 5 minutes to every 2 1/2 minutes when either an ACS or LSM was detected to be OFFLINE. It did not, however, actually poll an OFFLINE LSM of an ONLINE ACS at that increased rate. DCSC V3.0 has been modified to correct this deficiency.

4.8 Maintaining DCSC Client Lock Id Integrity

A client lock id is assigned by the StorageTek UNIX server.
It does this whenever a lock request (for either a tape device or a volume) received from a client (for example, DCSC) contains a lock id with a zero value. It returns the assigned lock id in the response packet to the client. The client can then use the same lock id when making additional lock requests. The lock id is released by the server when all locked resources (devices or volumes) have been unlocked.

In certain race conditions, with no lock id assigned, separate DCSC processes could make lock requests, resulting in two locks being assigned. The end result was that the first lock was lost and that resource could not be unlocked or used by DCSC.

DCSC V3.0 corrects this problem by locking the configured (via CARTRIDGE CONFIGURE) maintenance volume at DCSC startup. This results in either a new lock being assigned or an old lock being maintained. OPCOM warnings and messages will be given if the maintenance volume is not found in the StorageTek library. A missing volume will not, however, prevent completion of the startup.

Note that good practice is to ensure that each DCSC master node is assigned a separate maintenance volume and that each maintenance volume is unlocked at the StorageTek UNIX server prior to any "cold" start of its DCSC master. No unlock action is required for a "warm" start.

4.9 Content and Maintenance of DCSC Logs and Files

Error logging (DCSC$ERROR.LOG) was inconsistent, resulting in some entries that were truncated, over-written, or missing data. This has been corrected in DCSC V3.0.

At midnight, DCSC V3.0 now purges and rolls over the master/virtual message logs. At that time, the response file directory is also cleaned up via a batch job scheduled by DCSC startup.

4.10 Kit Install and Startup Improvements

If the logical DCSC$DIR is defined, then any installed DCSC shareable images are removed before DCSC$DIR is redefined. This eliminates the possibility of old installed shareable images being utilized by a newer version of DCSC.

4.11 Support DEC TCP/IP Services for OpenVMS

DCSC V3.2 has been fully tested with DEC TCP/IP Services for OpenVMS.

5  Significant Features Added Since DCSC V1.1

Following are the significant enhancements and problem resolutions since DCSC Version 1.1. They were originally implemented in DCSC V2.0.

5.1 CARTRIDGE ALLOCATE Features

CARTRIDGE ALLOCATE now provides three additional features. First, using the /GENERIC qualifier and a tape drive name of "TA90", you can let DCSC allocate any of the available drives in the silo. For this to work, you *must* have a "maintenance volume" defined in your configuration file. The drive selected will be from the LSM that contains your maintenance volume. Second, you may now specify the /SYSTEM qualifier to also request DCSC to issue an OpenVMS ALLOCATE command on the drive that has just been allocated by DCSC. Third, you may now specify a logical name to assign to the drive that was just allocated by DCSC. Any of these new features may be used together or separately.

5.2 Reduced Stack Usage

The number of pages of stack usage has been reduced by approximately four pages. This will benefit user-written applications that are run in elevated context and will have no effect on other users of DCSC.

5.3 Support for ACSLS V3.0 and StorageTek "Clipper Door"

DCSC provides support for ACSLS V3.0. The major feature is support for the new StorageTek large-capacity door. Complete documentation for the Clipper Door will be available in a future field test release.
5.3 Support for ACSLS V3.0 and StorageTek "Clipper Door" DCSC provides support for ACSLS V3.0. The major feature is support for the new StorageTek large capacity door. Complete documentation for Clipper Door will be available in a future field test release. Significant Features Added Since DCSC V1.1 5-1 Significant Features Added Since DCSC V1.1 5.4 Support for StorageTek "Library Station" Server 5.4 Support for StorageTek "Library Station" Server DCSC provides support for the StorageTek Library Station software. The Library Station appears to DCSC as an UNIX server (and should be configured as such). 5.5 Support for the Odetics 3480 Tape Library DCSC in conjunction with the Hewlett-Packard Product DECLS are now capable of controling the Odetics 3480 Tape Library. Please contact your Hewlett-Packard representative for more information about this new library. 5.6 Problems with >100 Scratch Tapes on UNIX Servers DCSC V1.1 contained a bug that caused scratch tape commands to time out if the StorageTek UNIX server had large numbers of tapes in the scratch pool. This problem does not exist in DCSC V2.0 and higher. 5.7 Increased Number of Libraries The number of possible library connections has been increased from four to sixteen. 5.8 SUBSYS Failure A number of changes have been made to more accurately report problems instead of simply reporting a "SUBSYS Failure" (error code 63). 5.9 "NULL DEVICE" Return Fixed Workprocess handling has been changed to report errors correctly rather than reporting success with no information. 5.10 Restart Log Files Each Night DCSC will now create new versions of the DCSC$LOGS:DCSC$CHANNEL_xx.LOG and DCSC$LOGS:DCSC$ORH.LOG each day at midnight. In addition, the versions of the file will be purged as specified in the configuration editor. All log files are now created with larger allocation and extent sizes to reduce the performance problems of a badly fragmented file. 5-2 Significant Features Added Since DCSC V1.1 Significant Features Added Since DCSC V1.1 5.11 DCSC$SUPERVISOR Crashes 5.11 DCSC$SUPERVISOR Crashes A bug has been fixed that resulted in DCSC$SUPERVISOR crashes on systems running software that synchronizes system times across a site. 5.12 Increased Queue Size DCSC V1.1 allowed only 64 requests to be outstanding to the StorageTek server. This has been increased to 128 requests. 5.13 Check Entire Configuration for Transport Leveling Using KCM-44s, it is possible that multiple data paths to a single, physical tape drive, are available. Transport leveling has been improved to consider each path as a separate drive. As a result, all data paths to your tape drives will be used over time. 5.14 ACS Internal Error A common cause of this error has been corrected on configurations that use StorageTek VM servers. This error was generated as a result of an automatic dismount occurring on the VM system and an unexpected status code being returned (CLS 4107). This code is now a valid return code for DCSC. 5.15 CARTRIDGE ALLOCATE and DEALLOCATE Drive This new DCL command (along with a new set of RTL commands) will allow a user to reserve and release resource locks on drive resources. These commands operate similar to the OpenVMS ALLOCATE and OpenVMS DEALLOCATE command, but are effected on the control path rather than the data path. This new feature should be utilized by any sites where drives are shared between systems. 
5.16 Use of an Input Data File for CART EJECT VOLUME

Due to command line restrictions under OpenVMS, it is not possible to list 40 volumes to be ejected through the new Clipper Door on a single command line. To support ejecting 40 tapes in a single command, the CARTRIDGE EJECT VOLUME command now accepts input data files. Each value must be a valid OpenVMS file name, and each file must contain a list of volumes to be ejected, one volume per line. Current DCSC documentation lists a /FILE= qualifier, which is incorrect. The correct syntax for the command is:

    $ CARTRIDGE EJECT VOLUME file_name[,file_name...]

The total number of volumes specified by the file list must be 40 or less.

5.17 CARTRIDGE {DISCARD, LIST, RESTORE} SAVESET Other Users Savesets

OpenVMS users whose authorized privileges include OPER as specified in the User Authorization File (UAF) and whose process privileges also include OPER can perform CARTRIDGE LIST SAVESET, CARTRIDGE DISCARD SAVESET, and CARTRIDGE RESTORE commands specifying a saveset created via CARTRIDGE SAVE by another user.

The specification of OPER in the UAF is required even if SETPRV is also set, due to the way that the Operator Request Handler (ORH) process works. This is a requirement only for multi-volume savesets. When the OpenVMS Backup utility gets to the end of a volume, it issues an operator request, which is intercepted by ORH. ORH issues a SYS$GETUAI call to determine what the user's authorized privileges are. OPER needs to be an authorized privilege, but not necessarily a default privilege.

Before issuing one of the CARTRIDGE commands to access another user's saveset, however, the OPER privilege must be set for the process (for example, by issuing a $ SET PROCESS/PRIVILEGE=OPER command). Other privileges, such as BYPASS, may be required for the process issuing the CARTRIDGE RESTORE command, especially if files and directories are to be created in other users' areas.

5.18 Allow Multiple Inputs to CARTRIDGE SAVE

Like OpenVMS BACKUP, CARTRIDGE SAVE now supports multiple input file specifications (a comma-separated list). This allows the user to select multiple files or disks as input to a single saveset.

5.19 /CACHE=TAPE_DATA

The OpenVMS qualifier /CACHE=TAPE_DATA has been added to the CARTRIDGE MOUNT command. Please refer to the OpenVMS documentation for more information about this qualifier. If the system manager prefers that all MOUNT operations be performed with this qualifier by default, the logical DCSC$CACHE_TAPE_DATA_WRITE should be defined as TRUE. Qualifiers on the command line take precedence over this logical.

5.20 /MEDIA_FORMAT=COMPACTION

The OpenVMS qualifier /MEDIA_FORMAT=COMPACTION has been added to the CARTRIDGE MOUNT and CARTRIDGE SAVE commands. Please refer to the OpenVMS documentation for more information about this qualifier. If /SYSINIT is also specified on the CARTRIDGE MOUNT command line, the volume will also be initialized with the /MEDIA_FORMAT=COMPACTION option.

If the system manager prefers that all operations be performed with this qualifier, the logical DCSC$MEDIA_FORMAT_COMPACTION should be defined as TRUE. Qualifiers on the command line take precedence over this logical.
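As an illustration of the command-line form, the following sketch combines the qualifiers above with the /DEVICE= and /SYSINIT usage shown elsewhere in these notes; the device and volume names are placeholders:

    $ CARTRIDGE MOUNT/DEVICE=$3$MUA80:/SYSINIT/MEDIA_FORMAT=COMPACTION SQ0400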
To utilize compaction on your ACS4400 system, you must have the ICRC update applied to your 4480 tape control unit (performed by StorageTek), and you must have either a KCM44 or a TC44 with at least V2.3F (preferably V2.3G) microcode. Without this hardware, the commands will complete, but there will be no data compaction.

5.21 CARTRIDGE LIST SAVESET/NOOUTPUT Fixed

This command now works correctly; no output is generated, but $STATUS will have a valid value.

5.22 Increased Command Strings for CARTRIDGE SAVE

DCSC V2.0 allows 512 characters in the CARTRIDGE SAVE command line (increased from 256).

5.23 Additional OpenVMS BACKUP Qualifiers

All tape-related OpenVMS BACKUP qualifiers as of OpenVMS V5.5-2 may be specified on the CARTRIDGE SAVE command line. In addition, to ensure upward compatibility, three new qualifiers have been added to CARTRIDGE SAVE. /CQUALIFIER="..." causes the quoted string to be passed directly to the BACKUP command line as a command qualifier. /IQUALIFIER="..." appends the string to the input file specification. /OQUALIFIER="..." appends the string to the output file specification.

5.24 NORMAL Status Added to CARTRIDGE SHOW VOLUME

The output from CARTRIDGE SHOW VOLUME now displays NORMAL, rather than nothing, if the volume is neither SAVESET nor SCRATCH.

5.25 CARTRIDGE START

You may now use this command to invoke the SYS$STARTUP:DCSC$STARTUP.COM procedure.

5.26 DCSC_RESERVE_DRIVE and DCSC_RELEASE_DRIVE

In support of the new CARTRIDGE ALLOCATE and DEALLOCATE commands, there are two new RTL calls. Please see the DCSC Programmer's Reference Guide for more information on these calls.

5.27 New Logical Names

Below are the new logical names that were added in this release. Note that SYS$STARTUP:DCSC$STARTUP.COM should not be modified to define or change logical names, because this file is replaced at installation. DCSC now checks for the presence of the file SYS$STARTUP:DCSC$LOCAL_STARTUP.COM and, if it is found, executes it. Any logical names that should be changed from their defaults should be placed in this file.

5.27.1 DCSC$LOG_CONTROL_nn

To help users resolve network and channel problems, a new logical has been implemented for each library that is configured in the system. Logging is controlled by defining the logical with any one, or any combination, of the following four characters:

o  X = Log all telegrams sent
o  U = Log unsolicited telegrams
o  R = Log all telegrams received
o  M = Log multi-packet responses

The default setting is now no logging, that is, all logging bits cleared.

5.27.2 DCSC$SAVESET_LOCK_TIMEOUT

DCSC uses a locking mechanism on the file DCSC$SAVESETS.DAT to ensure that only one process is using it at a time. This locking mechanism will attempt to lock the file for 30 seconds and then time out. A customer has reported that with a large number of saveset records (more than 500-1000) and multiple jobs started at the same time, the timeout will be hit repeatedly. To allow the site manager to adjust this timeout, a logical name has been created. If more than 30 seconds is required, the logical DCSC$SAVESET_LOCK_TIMEOUT should be defined to the desired number of seconds.
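As a sketch only, the two logicals above could be defined in SYS$STARTUP:DCSC$LOCAL_STARTUP.COM. The library number suffix (01), the logging flags, and the 120-second timeout are placeholder values, and the /SYSTEM/EXE attributes simply follow the pattern used for DCSC$MEDOFL_TIMER in Section 3.2:

    $ ! In SYS$STARTUP:DCSC$LOCAL_STARTUP.COM - values are illustrative
    $ DEFINE/NOLOG/SYSTEM/EXE DCSC$LOG_CONTROL_01 "XR"       ! log telegrams sent and received for library 01
    $ DEFINE/NOLOG/SYSTEM/EXE DCSC$SAVESET_LOCK_TIMEOUT 120  ! allow 120 seconds to lock DCSC$SAVESETS.DAT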
5.27.3 DCSC$CACHE_TAPE_DATA

If this logical is defined as TRUE, all volumes mounted by the CARTRIDGE MOUNT/SYSMOUNT commands will use the /CACHE=TAPE_DATA qualifier. See the OpenVMS documentation on MOUNT for more information. If the /CACHE qualifier is specified on the command line, it will override this logical.

5.27.4 DCSC$MEDIA_FORMAT_COMPACTION

If this logical is defined as TRUE, all volumes initialized and/or mounted by the CARTRIDGE MOUNT command, and volumes used by CARTRIDGE SAVE, will use the /MEDIA_FORMAT=COMPACTION qualifier. See the OpenVMS documentation on MOUNT for more information. If the /MEDIA_FORMAT qualifier is specified on the command line, it will override this logical.

5.27.5 DCSC$LOCK_BEFORE_EJECT

In configurations where a single OpenVMS cluster is the sole user of an ACS4400 library, and the library is serviced by a UNIX StorageTek server, you may disable locking of tapes prior to ejecting them. The locking mechanism is meant to ensure that the system that is requesting the eject has access to the tape, but it does add significant time to the overall eject. This locking will be bypassed if the logical name DCSC$LOCK_BEFORE_EJECT is defined as FALSE. Hewlett-Packard recommends that this feature not be used if you have more than one client system accessing the ACS 4400 system.

6  DCSC Upgrade Considerations

This chapter provides the information necessary to upgrade from a previous version of DCSC. If you are performing a new installation, you may skip this chapter.

6.1 DCSC License

When upgrading to DCSC V3.3, the current DCSC license will remain the valid and required license.

6.2 Logical Order of Upgrade

The following are the major steps involved in the upgrade and the order in which they should be completed.

6.2.1 Shutdown DCSC on All Nodes in the Cluster

To ensure that the data files are not corrupted during their conversion to a new format, you must shut down DCSC on all nodes in the cluster. You may do this by issuing the "$ CARTRIDGE SHUTDOWN" command or the "$ @SYS$STARTUP:DCSC$SHUTDOWN.COM" command. Watch for the OPCOM messages to ensure that DCSC has shut down completely before proceeding.

6.2.2 Note changes to SYS$STARTUP:DCSC$STARTUP.COM

The installation procedure will create a new SYS$STARTUP:DCSC$STARTUP.COM file. Please verify that no site-specific changes were made to the existing copy of the file. If there were site-specific changes, create a new file, SYS$STARTUP:DCSC$LOCAL_STARTUP.COM, and incorporate the changes there. The new startup procedure will search for this file and invoke it at DCSC startup.

6.2.3 Install DCSC V3.3

Please refer to the Data Cartridge Server Component Installation Guide for instructions on installing DCSC V3.3.

6.2.4 Conversion of DCSC V1.1 Data Files

The format of the files DCSC$FILES:DCSC$CONFIG.DAT and DCSC$FILES:DCSC$SAVESETS.DAT was changed in V2.0. If you are upgrading from DCSC V1.1, a conversion program must be executed in order to continue using the data in those files. The installation procedure will run this conversion automatically for you, if you choose. If you are upgrading from DCSC V2.0 or higher, then no conversion is necessary.
In the event that you need file conversion but did not select it during the installation of DCSC V3.3, you must run the file conversion manually before attempting to start DCSC. To do this, execute the file conversion program from a privileged (SYSTEM) account as follows:

    $ RUN DCSC$EXE:DCSC$FILE_CONVERT.EXE

6.2.5 Verify your Configuration

Before you start DCSC V3.3, you should enter the DCSC Configuration Editor (via CARTRIDGE CONFIGURE) and ensure that your configuration is set up correctly.

6.2.6 Start DCSC V3.3

You are now ready to start DCSC V3.3. DO NOT perform a "COLD" restart or all existing data files will be superseded. You may start DCSC by invoking the command procedure SYS$STARTUP:DCSC$STARTUP.COM or with the new DCL command "$ CARTRIDGE START".

6.2.7 Upgrade Guidelines for DCSC Application Programs

There have been no changes to the DCSC Application Interface. Software utilizing the DCSC Run-Time Library should run without recompiling or relinking.

7  DCSC in an OpenVMS (VAX and AXP) Environment

This chapter provides general information helpful in the operation of DCSC in an OpenVMS (both VAX and AXP) environment, including a method of configuring DCSC in a cluster that includes both VAX and AXP nodes. Also provided are the suggested compiler options for applications that need to link against the AXP DCSC Run-Time Library and a general discussion of how DCSC users can be affected by the default OpenVMS device protection scheme.

7.1 DCSC Application Software Compiler Options for AXP

So that application interface structures would be identical across both OpenVMS VAX and AXP platforms, the OpenVMS AXP version of DCSC was compiled with the following options:

    /STANDARD=VAXC /NOMEMBER_ALIGNMENT

Application software linking against the AXP DCSC Run-Time Library should ensure that references to DCSC request/response packet structures take data alignment into account by compiling in a similar fashion.

7.2 DCSC and OpenVMS Device Protection

There are some very strict rules under which tape devices may be accessed through DCSC (or any other method). Please note the following: the default device protection of a tape device, beginning with OpenVMS V6.1, can be quite different from the previous VAX/VMS default scheme.

    VAX/VMS device protection        S:RWED,O:RWED,G:RWED,W:RWED
    OpenVMS V6.1 device protection   S:RWPL,O:RWPL,G:R,W

For example, under OpenVMS V6.1, a process having the suggested DCSC application privileges of NETMBX and TMPMBX would have the following device status results when attempting a CARTRIDGE MOUNT:

    $ show device $3$mua80:
    Device Name             Device Status
    $3$MUA80:  (IDANO)      Online
    $ cartridge show device $3$mua80:
    DEVICE                  STATUS
    _$3$MUA80:              Avail
    $ cartridge mount/device=_$3$MUA80: sq0400
    %DCSC-E-DCSC_DRVACC_DENIED, Drive access (via ACL) denied
    $ cartridge show device $3$mua80:
    DEVICE                  STATUS
    _$3$MUA80:              Avail
    $ show device $3$mua80:
    Device Name             Device Status
    $3$MUA80:  (IDANO)      Online

Note that neither the load nor the mount of the device was performed.
As we have stated previously, unless security dictates otherwise, set the protection for each DCSC-configured tape device as follows:

    $ SET SECURITY/CLASS=DEVICE -
      /PROTECTION=(S:RWPL,O:RWPL,G:RWPL,W:RWPL) tape_device

Otherwise, you will be forced to grant excessive privileges to any process (other than SYSTEM) that requires access to tape devices.

8  Notes to Published Documentation

8.1 Configuration of DCSC Master Server Node

When configuring DCSC via the Configuration Editor program, all cluster nodes which are to run DCSC must be configured, including the master server node. This is required even though the master server node was entered in the SERVER IDENTIFICATION screen. Furthermore, the master server must be configured as type MASTER in order to satisfy the master server node configuration requirement. All other nodes must be configured as type DECNET.

8.2 Viewing DCSC Server Log Files

The DCSC server error and trace log files are stored in a text format, allowing them to be viewed using standard methods such as the DCL TYPE command. These files should not be opened directly in an editor, because some editors cause the file to become locked, which would interfere with DCSC server logging operations. If the use of an editor is desired, a local copy of the log file should be created using the OpenVMS COPY command.

8.3 Volume Remount Operations

The DCSC server supports the remounting of a drive that is currently mounted. The intent of the command is to allow the current owner of the drive to remount it with a new volume without having to relinquish ownership of the device at any point during the process. This command should be used with caution; it can lead to poor performance if the volume being remounted is much farther away from the current drive than it is from other available drives.

8.4 Cold Start

The DCSC "COLD" restart is intended to be performed only once, at initial system installation. Performing a cold restart after the system has been running has several side effects. The most significant ones are as follows:

o  All reserved volumes are released.
o  All reserved scratch volumes are released as non-scratch volumes.
o  All SAVESET records are discarded.
o  All drives are unlocked and dismounted.

Note: COLD restarts are intended only for initial installations. If you are upgrading from DCSC V1.1, please read Chapter 5, Significant Features Added Since DCSC V1.1.

8.5 StorageTek and DCSC Database Synchronization

Both the DCSC and the StorageTek servers maintain a database of drive and volume status. No actions should be performed at the StorageTek operator console that affect the state of resources used by DCSC. For example, after mounting a tape via DCSC, it is possible to perform a forced dismount from the StorageTek console. This will result in a loss of database synchronization, requiring a restart of the DCSC server.

8.6 Multiple Scratch Label Types

The DCSC server does not support the management of scratch volumes having label types other than "IBM Standard".
The VM version of the StorageTek server system also supports the following label types:

o  ASCII standard
o  Non-standard
o  Non-labeled

Multiple label types are indirectly supported. The site manager should establish separate scratch tape subpools within the StorageTek server for each label type. DCSC users can then request tapes from the appropriate subpool.

8.7 CARTRIDGE SHOW SAVESET

The command CARTRIDGE SHOW SAVESET/USER=username does NOT require privileges, contrary to what is stated in the documentation and help files. The default behavior of CARTRIDGE SHOW SAVESET is the same as specifying CARTRIDGE SHOW SAVESET/ALL.

8.8 OpenVMS ALLOCATE and CARTRIDGE MOUNT

The description of the CARTRIDGE MOUNT/SYSINIT command in the Data Cartridge Server Component User's Guide states that the /SYSINIT qualifier will not work if the device has been previously allocated. This restriction is no longer valid. /SYSINIT will work correctly on devices that have been allocated with the OpenVMS ALLOCATE command.

8.9 DCSC Authorization/Privilege Levels

The Request Authorization Levels section of the DCSC Programmer's Reference Guide incorrectly documents the OpenVMS privileges required to perform DCSC functions.

o  USER level authority lets general users perform read and write operations. This authority level corresponds to the OpenVMS process privilege mask of TMPMBX.

o  SYSTEM level authority provides access to the DCSC RTL nonprivileged (USER) functions and the majority of privileged functions. Privileged access functions are generally associated with configuration and administration of the DCSC server. This authority level corresponds to the OpenVMS process privilege mask of OPER.

o  SUPERUSER level authority allows full access to the DCSC RTL function set. This authority level corresponds to the OpenVMS process privilege mask of SYSPRV, not SETPRV as documented.

8.10 Using DECNET and TCP/IP Together

If your system is running both DECNET and TCP/IP, you must have the DECNET node name of the MASTER server defined in the TCP/IP database. You must either define the DECNET node name as the primary TCP/IP node name or define the DECNET node name as an alias of the primary TCP/IP node name. Failure to do this will prevent DCSC from starting properly, resulting in "GETHOST" errors in the error log file.

8.11 CARTRIDGE EJECT VOLUME file_name

Current DCSC documentation lists a /FILE= qualifier, which is incorrect. See Section 5.16 for the purpose and correct syntax of this command.

9  Functional Limitations

9.1 Support for Various ACSLS Versions

DCSC V3.3 supports ACSLS V2.1.1, V3.0, V3.1, V4.0, V5.0, and V5.1.1 through V5.3. DCSC has not been tested with, nor does it purport to support, any higher version of ACSLS. Support for ACSLS V2.1.1 is accomplished by changing the server type in the configuration file. Support for the other versions is selected by choosing the "UX" server type in the configuration editor.

DCSC V3.3 does not take advantage of all of the new features of ACSLS V4.0 and higher. In particular, DCSC does not utilize the new "access control" features.
9.2 StorageTek UNIX-Based Library Server CAP Priority

When the StorageTek UNIX-based library server is being used, the CAP priority for each CAP (Cartridge Access Port) should be set to a non-zero value. This will prevent invalid DCSC_CAP_INUSE errors from being returned when the DCSC_ANY_CAP option is used in specifying the target CAP for volume enter/eject operations. Refer to the appropriate StorageTek documentation for instructions on how CAP priorities are established.

9.3 StorageTek VM-Based Library Server Limitations

The StorageTek VM-based library server provides no interface through which DCSC can utilize the following commands:

o  CARTRIDGE ENTER SAVESET
o  CARTRIDGE ENTER VOLUME
o  CARTRIDGE EJECT SAVESET
o  CARTRIDGE EJECT VOLUME

9.4 StorageTek UNIX-Based Library Server Limitations

The StorageTek UNIX-based library server provides no interface through which DCSC can utilize the extended subpool qualifiers (/ACS=acs and /LSM=lsm) with the following commands:

o  CARTRIDGE RESERVE SCRATCH
o  CARTRIDGE RELEASE SCRATCH
o  CARTRIDGE SHOW RESERVE

The StorageTek UNIX-based library server provides no interface through which DCSC can utilize the /ALL qualifier with the following command:

o  CARTRIDGE SHOW SCRATCH

9.5 SHOW VOLUME Limitations With StorageTek UNIX-Based Library Server

When displaying the volume status (via the CARTRIDGE SHOW VOLUME command) for a nonreserved volume which resides within a library controlled by the StorageTek UNIX-based library server, the SCRATCH status is not returned.

9.6 StorageTek Library Station Server Limitations

The StorageTek Library Station server does not support the full functionality of the ENTER and EJECT commands. Library Station will only accept the following DCSC syntax for the ENTER and EJECT commands:

    CARTRIDGE ENTER VOLUME/ACS=0/LSM=ANY
    CARTRIDGE EJECT VOLUME/ACS=ANY/LSM=ANY
    CARTRIDGE ENTER SAVESET/ACS=0/LSM=ANY
    CARTRIDGE EJECT SAVESET/ACS=ANY/LSM=ANY

The StorageTek Library Station server supports only one volume pool, so the /SUBPOOL qualifier is not valid with Library Station.

9.7 CARTRIDGE SAVE and RESTORE Limitations

The CARTRIDGE SAVE and RESTORE commands accept qualifiers which are passed on to the spawned OpenVMS BACKUP command. CARTRIDGE SAVE currently does not support the following options:

    /OWNER_UIC
    /NOREWIND

Note that CARTRIDGE SAVE always performs a rewind.

CARTRIDGE RESTORE currently does not support the following options:

    /INTERCHANGE
    /NOREWIND

Note that CARTRIDGE RESTORE always performs a rewind.

9.8 CARTRIDGE MOUNT Limitation and Workaround

The CARTRIDGE MOUNT command's primary function is to load the tape volume into a transport. It also performs the OpenVMS MOUNT command to establish the data path for the user. The OpenVMS MOUNT qualifiers /OWNER and /PROTECTION are not supported. The user can get around this limitation by performing the CARTRIDGE MOUNT with the /NOSYSMOUNT qualifier and then performing the OpenVMS MOUNT command directly.

9.9 Multiple ACS Dynamic Device Recovery Limitation

Currently, only one maintenance volume can be configured for each ACS library under the control of DCSC. In a multiple-ACS configuration, the maintenance volume might not be able to be passed to the LSM that houses the drive to be recovered.

9.10 Saveset Names

DCSC does not support saveset names that end with a period, for example "BKPSAVE.".
9.11 Removal of DCSC Users from the System

Before removing a user account from a system, any reserved tape volumes belonging to that user must be released. DCSC does not support the release of reserved tape volumes belonging to a user that is no longer recognized by the system.

9.12 CARTRIDGE ENTER/CARTRIDGE EJECT Timeouts

If the response to an enter or eject command is the message:

    %DCSC-E-SUBSYS, 13:0:6:132 - Response has not arrived (timeout occurred)

then there was a timeout during the command and there may be an inconsistency between the DCSC database and the StorageTek database.

If there is a timeout during a CARTRIDGE EJECT SAVESET command, the saveset should be entered with the CARTRIDGE ENTER SAVESET command and ejected again with the CARTRIDGE EJECT SAVESET command.

If there is a timeout during a CARTRIDGE ENTER SAVESET command and the cartridges have not been put into the CAP yet, the CAP door should be opened and closed without putting any cartridges into it. The CARTRIDGE ENTER SAVESET command can then be entered again.

If there is a timeout during a CARTRIDGE ENTER SAVESET command and the cartridges were put into the CAP and entered into the StorageTek library, the cartridges should be ejected with the CARTRIDGE EJECT VOLUME command and re-entered with the CARTRIDGE ENTER SAVESET command.

10  Known Problems

10.1 128 Drive Maximum Per Library

DCSC Version V3.3 supports a maximum of 128 drives per ACS library, with an overall maximum of 256 drives. StorageTek ACSLS software allows more than 128 drives to be configured. This limit in DCSC may be removed in a future version.

10.2 ACS Server Internal Errors

Due to unusual events on the StorageTek server, DCSC will still receive some ACS Server Internal Errors. Please report all such occurrences to the Customer Support Center, along with as much information as possible about the state of the server and DCSC at the time.

10.3 TCPIP Virtual Nodes

As in V1.1, TCP/IP virtual nodes will not start correctly. All virtual nodes should use DECnet to communicate with the Master node.

10.4 Potential Robotic Server Dismount Problem

Under certain conditions, devices that are OpenVMS allocated on a DCSC Virtual node may incur a dismount delay or even fail to dismount. Consider the following example:

    $! All commands performed on DCSC Virtual Node
    $ ALLOCATE $3$MUA260:
    $ CARTRIDGE MOUNT/NOSYSMOUNT/DEVICE=$3$MUA260: SQ0070
    $!
    $! Invoke a program that does the following
    $!   MOUNT/FOREIGN $3$MUA260:
    $!   do read or write to tape
    $!   DISMOUNT/NOUNLOAD $3$MUA260:
    $!
    $ CARTRIDGE DISMOUNT/NOSYSDISMOUNT $3$MUA260:

At the point of the CARTRIDGE DISMOUNT command, the device will still be "OpenVMS loaded" and "OpenVMS allocated" on the Virtual node. DCSC always attempts to ensure that a device is "OpenVMS unloaded" before issuing a (physical) unload request to any robotic library server. It does this by assigning a channel to the device and issuing commands to check the device status. If the device status is not "medium offline", then DCSC will issue a command to "unload" the device. If, however, the device is OpenVMS allocated on another node, DCSC will receive a "device already allocated to another user" error code when it attempts to assign a channel.
In any case, DCSC will proceed with the unload request to the robotic library server. On the StorageTek VM server, that unload will be delayed (up to 2 1/2 minutes, not including any rewind time) due to the operational procedures of that system. An automatic "operator reply" can and should be configured (on the StorageTek VM server) in order to complete the dismount request for any device that has not been "OpenVMS unloaded". On the StorageTek UNIX-based server, DCSC issues an unload with the "force" option set. This causes the automatic unload to proceed without delay even if the device has not been "OpenVMS unloaded". On any robotic server that requires the device to be "OpenVMS unloaded", the dismount most likely will fail.

Note that this problem only occurs when the device is both "OpenVMS allocated" and "OpenVMS loaded" on a DCSC Virtual node at the time of the DCSC dismount request.