Hierarchical Storage Management (HSM) for OpenVMS
Release Notes

These release notes contain change and update information for HSM V4.3. Please read this document in its entirety before installing and using HSM.

Revision/Update Information: HSM for OpenVMS V4.3

January 2005

© 2005 Hewlett-Packard Development Company, L.P.

The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

Proprietary computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.

This document was prepared using VAX DOCUMENT, Version 2.1.

_________________________________________________________________

Contents

Preface................................................... v

1 Changes in HSM V4.3
1.1     Introduction..................................... 1-1
1.2     Summary of Changes for HSM V4.3.................. 1-1
1.3     Problems fixed in this release................... 1-2
1.4     Software and Hardware Requirements............... 1-2

2 Known Problems and Restrictions
2.1     Usage Recommendations and Restrictions........... 2-1
2.1.1     HSM V4.3 on OpenVMS V8.2 does not support
          remote tape devices............................ 2-1
2.1.2     SMU LOCATE for files with lowercase
          filenames...................................... 2-1
2.1.3     ODS-5 support is limited to files with
          filenames of fewer than 256 characters.........
2-1
2.1.4     Change in daylight saving time causes HSM
          to bug check................................... 2-2
2.1.5     Read Guide to Operations....................... 2-2
2.1.6     Operating System............................... 2-2
2.1.7     SMU Command Privileges......................... 2-2
2.1.8     Common License Problems........................ 2-2
2.1.9     Mass Shelving Warning.......................... 2-3
2.1.10    HSM VMScluster Environment..................... 2-3
2.1.11    Magneto-Optical Devices........................ 2-3
2.1.12    Cache Scanning................................. 2-4
2.1.13    Restriction on HSM Operations in
          Magneto-Optical Jukeboxes...................... 2-5
2.1.14    Restoring Disks or Renaming Disks.............. 2-5
2.1.15    Repack Restriction on Tx8xx Magazine Loaders... 2-6
2.1.16    Open File When Dismounting Disks............... 2-6
2.1.17    Auto-Unshelve Attribute on Remote Access....... 2-7
2.1.18    Device Accessibility........................... 2-7
2.1.19    Disabling and Deleting Devices................. 2-7
2.1.20    Dedicating Tape Devices........................ 2-8
2.1.21    TMSCP-Served Tapes............................. 2-9
2.1.22    Fast Tape Positioning.......................... 2-9
2.1.23    Basic Mode Media Compatibility Checking........ 2-10
2.1.24    Location of Activity Log....................... 2-11
2.1.25    Delay in HSM Shutdown or Exit.................. 2-11
2.1.26    Plus Mode Allocated Volume Warning............. 2-12
2.1.27    SMU May Require BYPASS Privilege............... 2-12
2.2     Outstanding Known Problems....................... 2-12
2.2.1     HSM supports shelving of multiple-period
          files (extended file specifications)........... 2-12
2.2.2     Cache File Delete Restriction.................. 2-13
2.2.3     EXCLUDE Processing on HSM DCL Commands......... 2-13
2.2.4     Restriction on Custom Drive Operation.......... 2-13
2.2.5     Cannot Cancel Open of RMS Indexed File......... 2-13
2.2.6     DELETE/LOG of Shelved Indexed File.............
2-14
2.2.7     OpenVMS DUMP Display Error..................... 2-14
2.2.8     Operation Failures When Device Full............ 2-14
2.2.9     SMU Locate Usage File Size Error............... 2-14
2.2.10    Cache Usage May Exceed Block Size.............. 2-15
2.2.11    Chained Messages Displayed With % Sign......... 2-15
2.2.12    SMU Commands and Drive/Volume Status........... 2-15
2.2.13    Wrong Density Definition in Plus Mode.......... 2-16
2.2.14    Use of Cleaning Tape in Magazine Loaders....... 2-16
2.2.15    SMU Locate Version Order is Wrong.............. 2-16
2.2.16    Tape Repack and Logical Names.................. 2-17
2.2.17    Diskquota exceeded trigger with rights
          identifier..................................... 2-17
2.2.18    Unshelve of a shelved file might fail for an
          SMU COPY'ed file............................... 2-17
2.2.19    HSM uses VMS Backup from OpenVMS 7.3-2
          onwards........................................ 2-18

3 Installation Notes
3.1     HSM Installation................................. 3-1
3.1.1     Installing ABS/MDMS............................ 3-1
3.1.2     Installing SLS/MDMS 2.9X....................... 3-1
3.2     Loading Drivers.................................. 3-2
3.3     Installing on OpenVMS Cluster Configurations..... 3-2
3.4     Possible IVP Error on Mixed Clusters............. 3-2
3.5     Catalog Split/Merge Details...................... 3-3
3.5.1     Overview....................................... 3-3
3.5.2     Split/Merge Phases............................. 3-3
3.5.3     Cancellation of Split/Merge Operations......... 3-4
3.6     Tape Repack Details.............................. 3-4
3.6.1     Overview....................................... 3-4
3.6.2     Requirements................................... 3-5
3.6.3     Restrictions................................... 3-5
3.6.4     Logicals....................................... 3-6
3.6.5     Recommendations................................
3-6

_________________________________________________________________

Preface

Purpose of this Document

This document describes:

o Changes in this release
o Features of the HSM software
o Information about using HSM
o Known problems and restrictions with the software
o Installation information

You can read the on-line release notes after completing the installation procedure by entering one of the following commands:

$ TYPE SYS$HELP:HSM043_RELEASE_NOTES
$ PRINT/PARAMETER=DATA=POSTSCRIPT -
  SYS$HELP:HSM043_RELEASE_NOTES.PS

Intended Audience

These release notes are intended for experienced OpenVMS system managers and should be used with the System Management Subkit of the OpenVMS documentation set.

HSM Product Summary

HSM for OpenVMS V4.3 is a software product that provides the following functions:

o Automatic migration of dormant files to secondary storage
o Policy-driven and/or manual operations
o Transparent migration of files to primary storage upon file data access
o Complete user and system management control of the HSM environment

The HSM documentation set contains the following manuals: the HSM Guide to Operations, the HSM Installation Guide, and the HSM Command Reference Guide.

1
_________________________________________________________________

Changes in HSM V4.3

1.1 Introduction

HSM V4.3 is a new product version of HSM for OpenVMS. This release contains new features as well as bug fixes.

______________________ IMPORTANT ______________________
Do not run HSM V4.3 with the HSM V4.0A (BL42) or lower
version of the driver installed; doing so may crash
your system.
_______________________________________________________

This kit contains the following savesets:

- HSM043.A - Common files and release notes
- HSM043.B - VAX executables
- HSM043.C - OpenVMS V7.2 Alpha and OpenVMS V7.3 Alpha executables
- HSM043.D - OpenVMS V8.2 Alpha executables
- HSM043.E - OpenVMS V8.2 I64 executables

1.2 Summary of Changes for HSM V4.3

The changes in this release are as follows:

- Qualification for OpenVMS I64 Version 8.2.
- Qualification for OpenVMS Alpha Version 8.2.

1.3 Problems fixed in this release

The HSM ACE was not updated by the REPACK operation for files whose date fields had been modified. When a file's revision date, backup date, or expiration date is modified, the repack operation updates the catalog with the new entry, but the ACE of the file is not updated. This was raised in CFS.103808 and is fixed in this release.

1.4 Software and Hardware Requirements

HSM V4.3 runs on OpenVMS Alpha V7.3-2 and V8.2, on OpenVMS VAX V7.3, and on OpenVMS I64 V8.2.

______________________ IMPORTANT ______________________
Media and Device Management Services (MDMS) provides
essential services for HSM Plus mode operation. If you
are also using the "backup via shelving" feature, you
will need to pay attention to the MDMS media management
option used. Users should take advantage of the latest
enhancements and newly qualified devices with the
latest version of MDMS (V4.3). Refer to the separate
MDMS release notes (MDMS043), included as softcopy in
the MDMS V4.3 portion of this kit.
_______________________________________________________

2
_________________________________________________________________

Known Problems and Restrictions

2.1 Usage Recommendations and Restrictions

This section documents restrictions and requirements for correct HSM operation. It also describes possible unexpected behaviors that you might observe while using HSM, and offers recommendations on usage.

2.1.1 HSM V4.3 on OpenVMS V8.2 does not support remote tape devices

HSM uses the Remote Device Facility (RDF) portion of MDMS for remote tape devices, that is, tape devices that are not directly connected within the cluster.
Since RDF is not supported with MDMS V4.3 on OpenVMS V8.2 Alpha and I64, HSM V4.3 cannot use remote tape devices.

2.1.2 SMU LOCATE for files with lowercase filenames

The SMU LOCATE command does not support lowercase filenames. As a workaround, you can look such files up by file identifier (SMU LOCATE */FID=(file_id)). This is a known problem and restriction with this release, and will be addressed in the next release of HSM.

2.1.3 ODS-5 support is limited to files with filenames of fewer than 256 characters

All HSM operations on ODS-5 disks are supported only for files with filenames of 255 characters or fewer. Shelving operations on files with longer filenames might result in abnormal behavior. Future versions of HSM will be enhanced to support filenames of more than 255 characters.

2.1.4 Change in daylight saving time causes HSM to bug check

A change in daylight saving time causes the HSM process to bug check. This has been reported as IPMT 99785. HSM Engineering is working on this problem, and the fix will be made available in a future version of HSM.

2.1.5 Read Guide to Operations

Although it is common for software to be installed and activated before reading the associated documentation, HP strongly recommends that you read the HSM Guide to Operations before activating this product. HSM transparently moves user data between online and nearline/offline storage, and you should have a full understanding of how and when this occurs before activating the product.

2.1.6 Operating System

This version of HSM requires OpenVMS VAX V7.3, OpenVMS Alpha V7.3-2 or V8.2, or OpenVMS I64 V8.2.

2.1.7 SMU Command Privileges

Use of SMU commands now requires SYSLCK in addition to the SYSPRV and TMPMBX privileges for all commands.
Additional privileges required for specific commands are noted under that command's description in the Guide to Operations.

2.1.8 Common License Problems

If you receive the following error consistently on PRESHELVE or SHELVE commands:

%HSM-E-UNEXPERR, unexpected error on operation

it may indicate a problem with the licensing software on your OpenVMS system, specifically the image SYS$LOADABLE_IMAGES:SYSLICENSE.EXE. You may need to re-install this image from the OpenVMS installation kit for HSM to proceed.

2.1.9 Mass Shelving Warning

Even though high-water-mark detection, occupancy and diskquota operations are disabled by default, you should take care when the time comes to enable them. If you enable these operations on the default volume, it could cause a mass shelving of files down to the low-water-mark of 80% on all disks in your cluster. While this is a desirable feature, it is recommended that this be done in a controlled manner. We recommend that you enable operations on a volume-by-volume basis when you wish to allow the operation to proceed:

$ SMU SET VOLUME/ENABLE=(HIGHWATER_MARK, OCCUPANCY, QUOTA) -
  volume_name

We also recommend that you begin with a high low-water-mark value, and gradually lower it until you have achieved a balance in your storage management operations. You can always use the SMU RANK command to determine, in advance, the files that would be selected when a policy runs on a volume.

2.1.10 HSM VMScluster Environment

All nodes in an OpenVMS Cluster must run HSM in order to receive full service when a shelved file is accessed. If this is not done, users on nodes not running HSM have no access to shelved file data. If you are running in HSM Plus mode, all nodes specified as shelf servers need to be running MDMS V4.3 or SLS V2.9G.
In addition, any non-HSM (remote) node serving an HSM device also needs to run MDMS V4.3 or SLS V2.9G. If non-cluster-accessible devices are defined, remote tape device support must be specified for SLS/MDMS.

2.1.11 Magneto-Optical Devices

MO devices used as cache can be made exempt from license capacity scanning (which would otherwise load all platters once per week). To disable capacity scanning on MO devices, you need to either:

o Disable shelving and unshelving on the default volume, and use specific volume records to enable HSM operations on specific disk volumes. If you do this, you do not need to enter specific SMU SET VOLUME commands for the MO (JB) volumes, -OR-

o Enable HSM operations on the default volume, and enter specific SMU SET VOLUME commands for the MO volumes which disable both shelving and unshelving.

For example:

Option 1:

$ SMU SET VOLUME/DEFAULT/DISABLE=ALL
$ SMU SET VOLUME $1$DKA100:/ENABLE=ALL
$ SMU SET VOLUME $1$DKA200:/ENABLE=ALL
$ SMU SET VOLUME $1$DKA300:/ENABLE=ALL

Option 2:

$ SMU SET VOLUME/DEFAULT/ENABLE=(SHELVE,UNSHELVE)
$ SMU SET VOLUME JBA0:/DISABLE=ALL
$ SMU SET VOLUME JBA2:/DISABLE=ALL

2.1.12 Cache Scanning

For each cache device, HSM maintains a current cache total to determine whether the cache is full according to the specified block size. Occasionally, it is necessary to perform a cache scan by reading all files in the cache directory and totaling the device. The scan is initiated on first access to the cache when HSM starts up, or later if the cluster-wide data structure holding the cache total becomes invalid. While cache scanning is necessary, it can potentially take a long time for caches with many thousands of files, particularly on magneto-optical devices. As such, you can disable cache scanning by dedicating the entire disk, or magneto-optical platter, to the HSM cache.
When you do this, cache scanning is avoided, because the total blocks-in-use obtainable from the system is used as the effective cache total. When dedicating a disk or platter to the HSM cache, it is recommended that you do not use it for other purposes. To enable a dedicated HSM cache, specify a block size of zero, as in the following example:

$ SMU SET CACHE $1$JBA0:/BLOCK_SIZE=0

2.1.13 Restriction on HSM Operations in Magneto-Optical Jukeboxes

For performance reasons, it is not a good idea to mix HSM cache platters and HSM-enabled platters in the same single-drive optical jukebox. This is because the act of shelving/unshelving to/from the cache platters will cause a load of a different platter for each I/O (just as copying a file from one platter to another will cause the same effect). If you are using a magneto-optical jukebox for HSM cache, you should disable shelving and unshelving on all platters in the jukebox (including the cache platters). It is all right to enable HSM operations on magneto-optical platters in other jukeboxes that do not contain HSM cache platters.

2.1.14 Restoring Disks or Renaming Disks

Since HSM allows backing up preshelved and shelved file headers only, it is now possible to restore such files from non-image backup tapes. When such a restoration takes place, the file identifiers of the (pre)shelved files may change. Since HSM uses the file identifier as the primary key relating the online file header to the shelved file data, it is necessary to repair the catalog to reflect the new file identifiers. Also, when renaming a disk from one physical device name to another, it is necessary to run the analyze/repair utility to correct the physical device name in the catalog.
For example, if DISK$USER1 used to be $1$DKA100: but is changed to $2$DKA200:, the analyze/repair utility will correct all entries in the catalog to refer to the new name. Therefore, when restoring or renaming any disk with preshelved or shelved files, the following command should be run on the device after the restore/rename but before the device is put into service:

$ SMU ANALYZE/REPAIR new_device_name:

2.1.15 Repack Restriction on Tx8xx Magazine Loaders

In Plus mode, SMU REPACK does not correctly handle tape volume sets in a Tx8xx magazine loader unless the volumes are "in order" in the magazine. You should attempt to place HSM volumes in the order of allocation in a magazine loader, for both Basic mode and Plus mode. If the order is not known, you may have to physically manipulate the volumes using MRU or the front panel during a repack operation. These restrictions do not apply to normal HSM operations, or to large tape jukeboxes such as the TL8xx or TL82x series. There is no additional charge for Plus mode.

2.1.16 Open File When Dismounting Disks

HSM keeps a file open on all disks on which HSM operations are enabled, even if the disk is not specified in an SMU SET VOLUME command but operations are enabled on the default volume. HSM also keeps the same file open on cache disks. With the file open, the disk cannot be dismounted. Enter the following commands to allow the file to be closed and the disk dismounted:

$ SMU SET VOLUME device_name/DISABLE=ALL   (on normal disks)
$ SMU SET CACHE device_name/DISABLE        (for immediate closing of the file)
$ SMU SET CACHE device_name/DELETE         (for closing after a cache flush)

The open file is named [000000]HSM$UID.SYS. Because of the way HSM accesses the file, it is not displayed by the OpenVMS SHOW DEVICE/FILES command.
Therefore, if you cannot dismount a disk because of an open file, and the file is not displayed by the SHOW DEVICE/FILES command, follow these procedures to close the file. When the requested disk is the root disk of a volume set, the DCL DISMOUNT command attempts to dismount all the volumes in the volume set. HSM may potentially have a [000000]HSM$UID.SYS file open on each member volume in the volume set, each of which needs to be disabled individually. For example, if you are attempting to dismount volume $1$DKA100:, where that disk is the root volume of a volume set containing $1$DKA100:, $1$DKA200:, and $1$DKA300:, issue the following SMU commands to close the UID files:

$ SMU SET VOLUME $1$DKA100:/DISABLE=ALL
$ SMU SET VOLUME $1$DKA200:/DISABLE=ALL
$ SMU SET VOLUME $1$DKA300:/DISABLE=ALL

You may also need to disable the cache (if applicable) on each volume as well.

2.1.17 Auto-Unshelve Attribute on Remote Access

The SET PROCESS/[NO]AUTOUNSHELVE attribute controls how a process handles access to a shelved file, either generating a file fault or an error. If a user accesses a shelved file from a remote system, however, this attribute is not honored. In this case, a file fault always occurs, because the file fault occurs in the context of the FAL process on the HSM system, which has /AUTOUNSHELVE enabled by default.

2.1.18 Device Accessibility

All devices designated as cache devices for HSM must be accessible and system-mounted on all nodes in the cluster that are running HSM. All devices designated as nearline/offline devices for HSM must be accessible on all nodes designated as shelf servers. You should not manually mount any tape devices (using the OpenVMS MOUNT command) for use by HSM. OPCOM messages will guide all physical tape operations; HSM performs the OpenVMS mount of all devices.
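The cache-device mounting requirement above is typically handled in the site-specific startup procedure on each HSM node. A minimal sketch follows; the device name and volume label are illustrative, not taken from these notes:

```
$! Mount the HSM cache disk system-wide on every node that runs HSM
$ MOUNT/SYSTEM/NOASSIST $1$DKA100: HSM_CACHE
```

Because the mount is /SYSTEM, the volume is available to all processes on the node; tape devices, by contrast, are left for HSM itself to mount as described above.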
2.1.19 Disabling and Deleting Devices

HSM V4.3 provides enhanced support for deleting and disabling HSM tape drives with new or pending operations. To remove a device from HSM service, the device may be disabled or deleted. The disposition of new and pending operations to the device is as follows:

o If the device is disabled, new and pending operations are requeued to other devices supported by the associated archive classes. If no other devices are currently available, the operations remain queued until a suitable device is made available. Operations are not canceled or switched to other archive classes. You should disable a device when taking it out of service for a short time.

o If the device is deleted, new and pending operations are requeued to other devices supported by the associated archive classes. If no other devices are defined in SMU for the archive class, the operations are canceled. In the case of an unshelve request, an attempt is made to unshelve from an alternative archive class using a different device (if defined); otherwise the request is canceled. In the case of a shelve request or cache file flush, the request is canceled. You should delete a device when taking it out of service for a long time, or permanently. Ideally, you should first define an alternative device for the archive class to avoid canceling requests.

There is a delay of up to one minute between disabling or deleting a tape drive with SMU SET DEVICE/DISABLE or /DELETE and having the associated volume dismounted. There is a similar delay for a dismount on a shared drive after the last HSM operation.

2.1.20 Dedicating Tape Devices

HSM V4.3 provides enhanced support for dedicated HSM tape devices, and this mode is recommended for faster unshelve response time. Dedicated devices are now partially supported in Plus mode.
When you dedicate a device, a tape volume remains loaded and mounted in the device until HSM receives a request requiring another volume on the device. In many cases, unshelve requests can be satisfied from the currently mounted volume. For example, on a TZ877 device (or a TL810/TL820 containing TZ87 drives), a file fault from a mounted tape takes about 90 seconds or less. If that tape had to be loaded and mounted, the same file fault would take from 3 to 5 minutes.

In Basic mode, a dedicated device is permanently allocated to HSM. In Plus mode, a dedicated device must sometimes be deallocated to allow MDMS to perform its drive selection algorithm. As such, Plus mode dedicated drives could be used by other applications. To avoid this, you can define a unique media type for the device that only HSM uses; this effectively "reserves" the device for HSM even when it is not allocated.

2.1.21 TMSCP-Served Tapes

With OpenVMS V6.2, TMSCP-served SCSI tape drives became supported across a VMScluster environment. For standalone tape drives, this support allows any HSM shelf server in the cluster to use the drive, and the SMU server designation can be specified as "any cluster member". However, for robotically controlled tape jukeboxes and magazine loaders, cluster-wide support for the robot operations is not provided. This means that the robot load/unload commands must be issued on the node on which the device resides, which in turn means that the shelf server must be defined to be that node. This applies to both Basic and Plus modes. Moreover, in Plus mode, the Remote Device Facility cannot be used for TMSCP-served devices in this release, since MDMS considers the drives to be "local". Please note that these restrictions do not apply to SCSI (or other) tape jukeboxes and magazine loaders connected to HSJ and HSD controllers.
These are truly cluster-wide for both drive and robot operations; in these environments, any HSM shelf server in the cluster can access the devices.

2.1.22 Fast Tape Positioning

HSM V4.3 contains algorithms for fast tape positioning on certain tape devices, using tape device firmware interfaces rather than the standard OpenVMS positioning calls. In some cases, this results in positioning speed increases of up to two orders of magnitude. It may be necessary to revert to the normal OpenVMS positioning algorithm in order to support third-party devices. You can do this by entering the following command:

$ DEFINE/SYSTEM HSM$NO_FAST_TAPE 1

To change back to fast tape positioning, deassign the logical name. Changes affect the next positioning operation without the need to restart HSM. The command should be entered on the shelf server node, or across all nodes using the SYSMAN utility in a cluster environment.

________ Restrictions on Fast Tape Positioning ________
Fast tape positioning is not yet available for remote
tape devices in Plus mode. It is also not supported
for SCSI devices connected to an HSC controller
(K.SCSI). However, you do not need to specify this
logical name for correct operation on these devices.
Fast tape positioning is supported on HSJ and HSD
controllers, however.
_______________________________________________________

2.1.23 Basic Mode Media Compatibility Checking

In Basic mode, HSM determines the media type by obtaining device information from the system and applying a standard media type for each supported device. However, when a new device is introduced, its device information may not yet be known to OpenVMS, resulting in a generic media type being generated.
If the site then upgrades to a new OpenVMS version that does recognize the device, HSM may deduce that the media type information stored in the catalog is not compatible with the "current" media type, and may refuse to unshelve files. To avoid this situation, HSM allows you to disable media compatibility checking, which allows unshelving from "nominally incompatible" devices. To disable media compatibility checking, enter:

$ DEFINE/SYSTEM HSM$NO_CHECK 1

on all nodes eligible to be shelf servers in the OpenVMS Cluster. This does not apply to Plus mode, where the media type is defined by the system manager.

2.1.24 Location of Activity Log

When you perform an SMU SHOW REQUESTS/FULL command, any resulting activity log file is always created at HSM$LOG:HSM$SHP_ACTIVITY.LOG, regardless of any other considerations. A new version of this file is created for each SMU SHOW REQUESTS/FULL request, but only if at least one request is active on the requesting node.

The activity log file is not created at the location specified by the /OUTPUT qualifier for SMU SHOW REQUESTS. This qualifier, as with all SMU SHOW commands, diverts only the screen output to the specified file. The activity log file (which is not screen output) is not affected by this qualifier, and is always written to the latest version of HSM$LOG:HSM$SHP_ACTIVITY.LOG. Also, a new file is not created if there are no outstanding requests on the node from which the request was activated. The SMU SHOW REQUESTS/FULL command indicates whether a new log file was created, and its location.

The activity log is not updated upon completion of requests. You must always enter SMU SHOW REQUESTS/FULL to examine the current activity on the node and make the activity log current.
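Because the activity log is only as current as the most recent SMU SHOW REQUESTS/FULL command, a typical inspection sequence refreshes it and then displays it. This sketch assumes at least one request is active on the node, so that a new log version is actually created:

```
$ SMU SHOW REQUESTS/FULL                 ! refreshes HSM$LOG:HSM$SHP_ACTIVITY.LOG
$ TYPE HSM$LOG:HSM$SHP_ACTIVITY.LOG      ! examine the newly created version
```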
2.1.25 Delay in HSM Shutdown or Exit

In order to maintain system and data integrity, HSM will not complete a SHUTDOWN operation if it has BACKUP or tape positioning operations outstanding. This applies to all variations of the SHUTDOWN command. You may experience a delay of up to five minutes before the shelf handler process (HSM$SHELF_HNDLR) exits, even when using SMU SHUTDOWN/NOW or SMU SHUTDOWN/FORCE. This is intended behavior. Of course, if you issue SMU SHUTDOWN without any qualifiers, the delay may be significantly longer, because HSM waits for all pending operations to complete before initiating the shutdown.

HP recommends that you use SMU SHUTDOWN to shut down HSM manually. However, in a command procedure that shuts down the system, you should use SMU SHUTDOWN/NOW.

2.1.26 Plus Mode Allocated Volume Warning

When HSM allocates a volume from the SLS/MDMS pool, the volume should remain allocated indefinitely. This is especially important because HSM uses tape volume sets within archive classes to handle continuation of files from one tape volume to the next. Under no circumstances should HSM-allocated volumes be freed using the STORAGE DEALLOCATE or MDMS DEALLOCATE VOLUME command; otherwise, the volume set relationship could be lost. A future release of HSM may provide tape volume consolidation and compaction.

2.1.27 SMU May Require BYPASS Privilege

If HSM is installed from an account other than SYSTEM, it may be necessary for system managers to enable the BYPASS privilege before all SMU commands will execute successfully from the SYSTEM account.

2.2 Outstanding Known Problems

The following known problems apply to the HSM V4.3 release. They will be fixed in a future release.
2.2.1 HSM supports shelving of multiple-period files (extended file specifications)

HSM supports shelving of files with multiple periods in their names (extended file specifications) from an ODS-5 disk to a cache device on an ODS-2 disk. For example, a file with multiple periods, such as "a^.b^.c^.d^.dat", can be shelved to an ODS-2 cache device; the file is given the cache filename "abcd.dat". With this cache filename, HSM shelves the file to the cache device (ODS-2 disk). The file can be located using the following command:

$ SMU LOCATE */FID=

This is a known problem and will be fixed in a future release of HSM.

2.2.2 Cache File Delete Restriction

When a cache is set to /NOHOLD, a delete of a shelved file causes the associated cache file(s) to also be deleted. Unfortunately, in this release, the cache file is not deleted if a preshelved file is deleted. The cache file will be deleted if the preshelved file is modified, however, or if an unpreshelve operation is issued on the file. This will be fixed in a future release.

2.2.3 EXCLUDE Processing on HSM DCL Commands

If you use the /EXCLUDE qualifier on the SHELVE, PRESHELVE, UNPRESHELVE or UNSHELVE commands, the user's default device/directory is used to perform the exclude if one was not specified. Most other DCL commands apply the device/directory given in the original parameter list rather than the user's default directory.

2.2.4 Restriction on Custom Drive Operation

In Plus mode, if you enable a tape drive for just shelving or just unshelving, there may be conflicts between HSM and MDMS about choosing an appropriate drive for an operation. Requests will appear to hang with no OPCOM messages; they can be recovered by enabling both shelving and unshelving on the device. As a workaround for this release, you should enable all operations on all tape drives.
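The workaround can be applied with a command of the following form. The drive name is illustrative, and the exact /ENABLE keywords for SMU SET DEVICE should be confirmed against the command's description in the HSM Command Reference Guide; they are an assumption here, modeled on the SMU SET VOLUME/ENABLE syntax shown earlier in these notes:

```
$! Illustrative: re-enable both operations on a drive that was custom-configured
$ SMU SET DEVICE $1$MUA500:/ENABLE=(SHELVE,UNSHELVE)
```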
2.2.5 Cannot Cancel Open of RMS Indexed File

If you perform an OPEN on an RMS indexed file, a file fault is generated, because some of the prologue information about the file is stored in the file data rather than in the file header. This file fault cannot be canceled with Ctrl/Y, because DCL and RMS do not support canceling a file open operation. This restriction will remain until support for canceling OPEN operations is supplied with OpenVMS.

2.2.6 DELETE/LOG of Shelved Indexed File

If you enter a DELETE/LOG of a shelved, RMS-indexed file, OpenVMS causes a file fault on the data, since this command issues an RMS open. In turn, the open requires use of prologue data, which is stored in the data section of the file rather than in the header. To avoid file faults when deleting shelved indexed files, use DELETE/NOLOG (the default) instead.

2.2.7 OpenVMS DUMP Display Error

If you issue a DUMP/HEADER of a shelved file, the shelving Application Control Entry is printed incorrectly. The last several bytes are printed in ASCII instead of hexadecimal format. Depending on the characters printed, this may alter the terminal's characteristics or hang the terminal. This problem has been reported to OpenVMS engineering and will be fixed in a future release of OpenVMS.

2.2.8 Operation Failures When Device Full

If the shelf handler process encounters a device-full situation during its own operation, the request generating the condition will fail. For this release, the shelf handler cannot service make-space requests generated by itself. You can avoid this problem by maintaining a policy on the device that avoids device-full conditions.
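To illustrate the DELETE/LOG recommendation (the filename is hypothetical):

```
$ ! Avoid: DELETE/LOG INDEXED.DAT;1 issues an RMS open and
$ !        faults the shelved file's data back online
$ ! Recommended: use the default /NOLOG, which deletes the
$ !              file without causing a file fault
$ DELETE INDEXED.DAT;1
```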
2.2.9 SMU Locate Usage File Size Error

On some files (for example, RMS indexed files), the SMU LOCATE command displays the wrong usage size for the file, although the allocation size is correct. The usage size displayed may be one block larger than the actual usage size, and in some cases may be one block larger than the allocation size. This is a display error only, with no other impact on HSM operations.

2.2.10 Cache Usage May Exceed Block Size

In some cases, HSM may exceed the specified block size on a cache disk. There is a small window of time during which multiple cache operations miscalculate the usage of the cache and may exceed it by a small amount. If this occurs, a cache flush is initiated, and the extra usage is usually not significant.

2.2.11 Chained Messages Displayed With % Sign

The OpenVMS convention of chaining error messages with a '-' symbol is not honored. The following is an example of that convention:

$ TYPE POPOPO.LIS
%TYPE-W-SEARCHFAIL, error searching for DISK$USER1:[SMITH]POPOPO.LIS;
-RMS-E-FNF, file not found

Note that the second message is chained and begins with a '-'. For HSM V1.0A, all chained messages begin with a '%' instead. The following is an example of a chained message with HSM:

$ SHELVE FOO.DAT
%SHELVE-W-ERROR, error shelving file DISK$USER1:[SMITH]FOO.DAT;1
%HSM-E-INELIGSHLV, file $9$DKA0:[SMITH]FOO.DAT;1 is ineligible for shelving

2.2.12 SMU Commands and Drive/Volume Status

The SMU commands SET CACHE, SET DEVICE, and SET VOLUME have built-in checks to determine whether the referenced entity is available for use. For example, if an offline drive is specified in an SMU SET DEVICE command, the code checks whether the drive is accessible on the issuing system. If not, the SMU command fails. Note that the command with the /DELETE qualifier always works, in case a device has been removed from the system.
To avoid this problem, do the following:

o Enter all SET DEVICE commands on a node that has visibility to the device. For example, enter the command on a shelf server node.

o Enter all SET CACHE and SET VOLUME commands when the specified device is accessible with the appropriate volume mounted.

The problem does not occur on remote tape devices when the /REMOTE qualifier is used.

2.2.13 Wrong Density Definition in Plus Mode

If you accidentally define the wrong density for an archive class in Plus mode, and you attempt to shelve to the archive class, HSM may modify the density in the SLS volume record to the density defined in the archive class. This is an unfortunate side effect of the SLS ALLOCATE operation. If this happens, enter a STORAGE SET VOLUME command to set the volume density back to the correct value. Note that the archive class density must exactly match the density specified in TAPESTART.COM. If no density is specified for the archive class, none must be specified in TAPESTART.COM either.

2.2.14 Use of Cleaning Tape in Magazine Loaders

For HSM V4.3, cleaning tapes are qualified in Tx8xx magazine loaders if the loader has firmware V4.0A or later installed. However, it is recommended that cleaning tapes not be present in any magazine used by HSM. Verify the firmware revision level on Tx8xx drives before using a cleaning tape in an HSM magazine loader. If you need to use a cleaning tape, use it manually.

2.2.15 SMU Locate Version Order is Wrong

When displaying multiple files with the same name but different version numbers, SMU LOCATE displays them in alphabetical order, rather than numeric order. For example, TEST.TMP;1 is followed by TEST.TMP;10, rather than TEST.TMP;2.
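As a sketch of the density correction described above (the volume label AB0001 is hypothetical; on many SLS versions, STORAGE SET VOLUME presents a screen form in which the Density field can be edited back to the correct value):

```
$ ! Correct the density in the SLS volume record after an
$ ! unwanted modification by the SLS ALLOCATE side effect
$ STORAGE SET VOLUME AB0001
```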
2.2.16 Tape Repack and Logical Names

HSM tape repack requires temporary disk space defined by either the HSM$MANAGER (default) or HSM$REPACK system-wide logical name. Errors will occur during tape repack if either of these names is a rooted logical name. For example, HSM$REPACK should NOT be defined as HSM$ROOT:[REPACK] when HSM$ROOT is defined as the rooted string DISK$SERVER:[HSM.], because the resulting translation of HSM$REPACK is DISK$SERVER:[HSM.][REPACK].

2.2.17 Diskquota Exceeded Trigger with Rights Identifier

The user diskquota exceeded trigger is an event that occurs when a process requests additional online storage space that would force it to exceed the allowable permanent disk quota. The shelving process selects for shelving files owned by the owner of the file being created or extended. This trigger is independent of the owner of the process that extends the file; only the file ownership is significant. For example, if user A creates a file, and user B extends the file beyond user A's disk file quota, user A's files will be shelved. However, the following restriction applies to this functionality: on Alpha, remedial BACKUP.EXE and BACKUPSHR.EXE images are required if /OWNER=rights-identifier does not work with the BACKUP command in the event of a user diskquota exceeded policy. These images can be obtained from VMS Engineering.

2.2.18 Unshelve of a Shelved File Might Fail for a SMU COPY'ed File

When a file shelved to cache is copied with SMU COPY to a different domain (disk), a cache flush might not update the copied file's catalog entry with the offline information, as it does for the original file. Instead, the catalog entry will remain undisturbed, with invalid cache information. This may lead to failure of an unshelve operation on the SMU COPY'ed file.
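To make the rooted-logical restriction concrete (the device and directory names here are hypothetical):

```
$ ! WRONG: rooted definition; HSM$REPACK translates to
$ !        DISK$SERVER:[HSM.][REPACK], which causes repack errors
$ DEFINE/SYSTEM/TRANSLATION=(CONCEALED) HSM$ROOT DISK$SERVER:[HSM.]
$ DEFINE/SYSTEM HSM$REPACK HSM$ROOT:[REPACK]
$
$ ! BETTER: define HSM$REPACK directly as a full, non-rooted path
$ DEFINE/SYSTEM HSM$REPACK DISK$SERVER:[HSM.REPACK]
```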
Manual intervention to recover the file may be required if the original file is deleted and its corresponding catalog entry purged.

2.2.19 HSM Uses VMS Backup from OpenVMS 7.3-2 Onwards

To meet the VMS Backup requirement, the logical name SLS$SYSTEM needs to be defined. HSM defines this logical in HSM$STARTUP.COM. The logical will be deassigned if SLS is shut down, which results in the error "%BACKUP-F-NOSLS, qualifier /!AS is invalid when SLS is not installed" during the shelving operation. This problem will not be seen on versions prior to OpenVMS 7.3-2, since HSM uses HSM$BACKUP for all shelving/unshelving operations. To overcome this problem, define the SLS$SYSTEM logical to a non-zero value or restart HSM using HSM$STARTUP.COM.

3
_________________________________________________________________
Installation Notes

3.1 HSM Installation

If you are installing HSM V4.3, you will be installing the latest version of HSdriver. If you are upgrading to HSM V4.3 from a version prior to HSM V4.2, you must reboot the system immediately after upgrading in order to run HSM V4.3.

______________________ IMPORTANT ______________________
Do not run HSM V4.3 with the HSM V4.0A (BL42) or lower version of the driver installed; doing so may crash your system.
______________________________________________________

The HSM installation process is fully described in the HSM for OpenVMS Installation Guide. This section discusses the OpenVMS CD-ROM distribution, cluster configurations, and magazine loaders, which require special attention.

3.1.1 Installing ABS/MDMS

HSM V4.3 can be used with ABS V4.3. HSM and ABS use the same MDMS for media management. This version, HSM V4.3, is packaged with MDMS V4.3.

3.1.2 Installing SLS/MDMS 2.9X

HSM is compatible with SLS/MDMS V2.9H. Refer to the HSM V4.3 Installation Guide for more information.
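A minimal sketch of the SLS$SYSTEM workaround follows; the value 1 is an assumption (any non-zero value should do), and the startup procedure location is assumed to be SYS$STARTUP:

```
$ ! Redefine the logical deassigned by the SLS shutdown...
$ DEFINE/SYSTEM SLS$SYSTEM 1
$ ! ...or restart HSM, which defines it in HSM$STARTUP.COM
$ @SYS$STARTUP:HSM$STARTUP.COM
```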
This is not applicable if you are running HSM in Basic mode. On OpenVMS I64 V8.2, there is no support for SLS. Hence, HSM043 will work only with MDMS043 on OpenVMS I64 V8.2. No lower version of MDMS is supported on I64 V8.2.

3.2 Loading Drivers

The load command for device drivers is different on VAX and AXP/I64 systems, and is incompletely documented in the Guide to Operations. The proper driver loading commands are:

VAX SYSTEMS

$ MCR SYSGEN CONNECT HSA0:/NOADAPTER                       (HSDRIVER)
$ MCR SYSGEN CONNECT MKA101:/DRIVER=GKDRIVER/NOADAPTER     (SCSI LOADER EXAMPLE)

AXP SYSTEMS and I64 SYSTEMS

$ MCR SYSMAN IO CONNECT HSA0:/NOADAPTER                    (HSDRIVER)
$ MCR SYSMAN IO CONNECT MKA101:/DRIVER=GKDRIVER/NOADAPTER  (SCSI LOADER EXAMPLE)

3.3 Installing on OpenVMS Cluster Configurations

If you are installing HSM on multiple system disks of an OpenVMS cluster, you should verify that the logical names HSM$MANAGER, HSM$CATALOG, and HSM$LOG are equivalent on all cluster nodes, and that the device pointed to by these logicals is accessible to all nodes of the cluster running HSM.

3.4 Possible IVP Error on Mixed Clusters

It is possible to confuse the OpenVMS Queue Manager in a mixed VMScluster environment (with both VAX and Alpha systems) that is configured with multiple SYSUAF.DAT files. If HSM is not installed on both types of system, the following error may result from an SMU SET SCHEDULE operation:

PEP_SCHED_ERROR %RMS-E-RNF, record not found

This problem may be seen during the Installation Verification Procedure (IVP) when installing on either a VAX or an Alpha system, if the queue manager is executing on a node that does not have HSM installed. In this environment, HSM should be installed on at least one VAX and one Alpha system, to ensure that each SYSUAF.DAT contains an entry for the account "HSM$SERVER".
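The cluster requirement above can be checked with plain DCL on each node; the logicals should translate identically everywhere:

```
$ ! Confirm the HSM logicals translate identically on every node
$ SHOW LOGICAL HSM$MANAGER
$ SHOW LOGICAL HSM$CATALOG
$ SHOW LOGICAL HSM$LOG
```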
You must select the same UIC for the HSM$SERVER account in each case for correct operation. You can ignore this IVP error if you are installing the product for the first time on mixed VAX and Alpha clusters; the installation did complete correctly.

3.5 Catalog Split/Merge Details

This section describes details of the HSM catalog split/merge process.

3.5.1 Overview

An HSM catalog split/merge operation occurs as the result of either of the following SMU commands:

o SMU SET SHELF/CATALOG=(catalog_file)

o SMU SET VOLUME/SHELF=(shelf_name)

In the former case, the split/merge is managed under the specified shelf object, and entries for all HSM disk volumes backed by that shelf are copied from the shelf's original catalog file to the target catalog file. In the latter case, the split/merge is managed under the specified volume object, and only entries for that volume are copied from the volume's original shelf catalog file to the target shelf catalog file.

3.5.2 Split/Merge Phases

In either type of split/merge, the operation proceeds in two discrete phases, Copy and Delete. During the Copy phase, entries are copied from the original catalog file to the target catalog file. During the Delete phase, entries are deleted from the original catalog file. During the split/merge, the shelf or volume under which the operation is managed displays the state of the operation, either Copy or Delete. Any other shelf or volume object affected by the split/merge displays a Busy state during the operation.

3.5.3 Cancellation of Split/Merge Operations

The split/merge process may take many hours to complete, depending upon the size of the catalogs involved and the background request load placed upon HSM. A split/merge operation may be safely canceled while in progress, if necessary.
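A sketch of initiating each type of split/merge follows; the shelf name, volume name, and catalog file specification are hypothetical:

```
$ ! Move entries for all of a shelf's volumes to a new catalog file
$ SMU SET SHELF MY_SHELF /CATALOG=DISK$HSM:[HSM.CATALOG]NEW_CATALOG.SYS
$
$ ! Move only one volume's entries to another shelf's catalog
$ SMU SET VOLUME $1$DKA100: /SHELF=OTHER_SHELF
```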
To cancel a split/merge, issue one of the following commands, specifying the same target shelf or volume object as that of the original SMU command:

o SMU SET SHELF/CANCEL

o SMU SET VOLUME/CANCEL

3.6 Tape Repack Details

This section describes details of the HSM tape repack process.

3.6.1 Overview

The HSM tape repack process is initiated by the HSM system administrator via an SMU command. It is provided to assist the HSM administrator with the following tape maintenance functions:

o Purging obsolete shelf data - Shelf data corresponding to deleted online files, or to copies made of earlier versions of a given online file, is deleted from the HSM catalog, while valid shelf data is copied to new media.

o Moving shelf data to an alternate media type - Shelf data can be moved to a new media type, using the /TO qualifier.

o Replacing existing media - Shelf data on existing media that was lost or destroyed can be recreated from an alternate archive class, using the /FROM qualifier.

The tape repack process is controlled by an HSM subprocess. The process consists of two steps:

1. Scan of all HSM catalog files for repack candidate files. This step generates the temporary file HSM$MANAGER:HSM$ARP_CAND.DAT, which contains entries for all shelf data candidate files, and determines which candidate files are eligible for repack to the destination media.

2. Repack of all eligible candidate files found by the previous step. Each file is repacked via an HSM internal Repack File request, which first restores data from the source archive class, then saves data to the destination archive class. Each request creates a temporary (.RPK) file on HSM$MANAGER, which is deleted when the request completes.
3.6.2 Requirements

The following are the minimum resources necessary for HSM tape repack:

o Tape drives: Two tape drives, one compatible with the repack source archive class and one compatible with the repack destination archive class.

o Disk space: 20,000 blocks of scratch disk space. The default for this resource is 50,000 disk blocks, residing on the HSM$MANAGER device and directory. This resource can be adjusted using a logical name, described below.

o Repack File requests: The tape repack process strives to maintain a constant load of 200 Repack File requests on the shelf handler process. This resource can be adjusted using a logical name, described below.

3.6.3 Restrictions

Only one tape repack process will execute at a given time. Both the source and destination archive classes must be defined for all HSM shelf objects participating in a tape repack operation. Tape repack data may be written on the same media along with new shelf data. Tape repack requires the same HSM cluster-wide lock to synchronize catalog access as the split/merge process does. Once started, a tape repack operation may prevent a concurrent split/merge operation from completing, even if the split/merge began first. For this reason, it is recommended that tape repacking not be initiated concurrently with split/merge operations.

3.6.4 Logicals

The following system-wide logical names may be employed to supplement control of the tape repack function:

o HSM$REPACK - This logical can be used to override the default disk scratch area device and directory (HSM$MANAGER).

o HSM$ARP_MAX_BLOCKS - This logical can be used to override the default number of scratch disk blocks (50,000).

o HSM$ARP_MAX_REQESTS - This logical can be used to override the default number of concurrent Repack File requests (200).

o HSM$SHP_REMOTE_AUDIT - Defining this logical causes each Repack File request to be written to the HSM audit log.
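The logicals above might be defined as follows; the device, directory, and numeric values are arbitrary examples, and the logical names are reproduced exactly as documented in this section:

```
$ ! Use a scratch area on a dedicated disk instead of HSM$MANAGER
$ DEFINE/SYSTEM HSM$REPACK DISK$SCRATCH:[HSM.REPACK]
$ ! Raise the scratch-block ceiling from the 50,000-block default
$ DEFINE/SYSTEM HSM$ARP_MAX_BLOCKS 100000
$ ! Lower the concurrent Repack File request load from the default 200
$ DEFINE/SYSTEM HSM$ARP_MAX_REQESTS 100
$ ! Write each Repack File request to the HSM audit log
$ DEFINE/SYSTEM HSM$SHP_REMOTE_AUDIT TRUE
```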
o HSM$F_ARP_PRINT_DEBUG - This is a system-wide logical which, if defined as a valid file specification, results in a debug information listing. This listing can be very useful for troubleshooting REPACK problems.

3.6.5 Recommendations

Here are some recommendations to follow to help ensure successful tape repack operations:

o Prior to tape repacking, the HSM administrator should use the SMU LOCATE command to determine the approximate number and location of the files that will be moved by the process.

o Prior to tape repacking, check that the appropriate shelf object(s) Archive and Restore lists have been properly set. If these are misconfigured, the tape repack process will eventually fail.

o Prior to tape repacking, check that the appropriate shelf object(s) "Save Time" and "Updates Saved" attributes have been set to the desired values. These affect whether deleted files or multiple file updates are propagated to the new media.

o Shelve requests issued to HSM during a tape repack will place new data on the media along with repacked data. It may be desirable to disable shelving during this process when recreating a specific piece of media with the /FROM= qualifier.

o Prior to tape repacking, check that the logicals described above have been properly defined. The tape repack process competes directly with other shelve operations, so try to avoid periods of peak HSM policy-initiated shelving.

o Use the SMU SHOW REQUESTS or SHOW SYSTEM commands to verify that the repack subprocess is running. Avoid issuing duplicate repack requests, which will be queued behind any repack process in progress.

o Use the /LIST= command qualifier to produce a log of tape repack actions. When the tape repack process completes, analyze the information in both the repack and audit logs to determine whether the source media may be deallocated from HSM.
o The tape repack process can take many hours to complete, and may be canceled in an emergency by re-issuing the original repack command with the /CANCEL command qualifier.

o Normal completion of a tape repack, or shutdown of HSM, will delete the temporary (.RPK) files created by this process. However, abnormal termination of a tape repack may leave some of these files in HSM$MANAGER or HSM$REPACK. These files are of no use to HSM other than during a tape repack.
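These notes do not spell out the repack command verb itself; the sketch below assumes an SMU REPACK form with hypothetical archive class names, purely to illustrate the /LIST= and /CANCEL usage described above:

```
$ ! Start a repack, logging actions to a listing file
$ ! (the verb, class names, and log file are assumptions)
$ SMU REPACK ARCHIVE_1 /TO=ARCHIVE_2 /LIST=HSM$MANAGER:REPACK.LOG
$
$ ! In an emergency, cancel by re-issuing the same command with /CANCEL
$ SMU REPACK ARCHIVE_1 /TO=ARCHIVE_2 /CANCEL
```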