This chapter describes how to design a storage subsystem. The design process involves the following steps:

  1. Understanding storage product choices
  2. Determining storage capacity requirements
  3. Choosing disk performance optimizers
  4. Determining disk availability requirements
  5. Choosing the storage interconnect (SAS, SCSI, Fibre Channel, or host based)
The rest of this chapter contains sections that explain these steps in
detail.
5.1 Understanding Storage Product Choices
In an OpenVMS Cluster, storage choices include the StorageWorks family of products, a modular storage expansion system based on the Small Computer Systems Interface (SCSI-2) standard. StorageWorks helps you configure complex storage subsystems by choosing from the following modular elements:
Consider the following criteria when choosing storage devices:
One of the benefits of OpenVMS Cluster systems is that you can connect storage devices directly to OpenVMS Cluster interconnects to give member systems access to storage.
In an OpenVMS Cluster system, the following storage devices and adapters can be connected to OpenVMS Cluster interconnects:
Table 5-1 lists the kinds of storage devices that you can attach to specific interconnects.
Storage Interconnect | Storage Devices |
---|---|
SCSI | HSZ controllers and SCSI storage |
Fibre Channel | HSG and HSV controllers and SCSI storage |
SAS | LSI 1068 and LSI Logic 1068e controllers and SCSI storage |
If the cost of floor space is high and you want to minimize the floor space used for storage devices, consider these options:
5.2 Determining Storage Capacity Requirements
Storage capacity is the amount of space needed on storage devices to
hold system, application, and user files. Knowing your storage capacity
can help you to determine the amount of storage needed for your OpenVMS
Cluster configuration.
5.2.1 Estimating Disk Capacity Requirements
To estimate your online storage capacity requirements, add together the storage requirements for your OpenVMS Cluster system's software, as explained in Table 5-2.
Software Component | Description |
---|---|
OpenVMS operating system |
Estimate the number of blocks required by the OpenVMS operating system.
Reference: Your OpenVMS installation documentation and Software Product Description (SPD) contain this information. |
Page, swap, and dump files |
Use AUTOGEN to determine the amount of disk space required for page,
swap, and dump files.
Reference: The OpenVMS System Manager's Manual provides information about calculating and modifying these file sizes. |
Site-specific utilities and data | Estimate the disk storage requirements for site-specific utilities, command procedures, online documents, and associated files. |
Application programs |
Estimate the space required for each application to be installed on
your OpenVMS Cluster system, using information from the application
suppliers.
Reference: Consult the appropriate Software Product Description (SPD) to estimate the space required for normal operation of any layered product you need to use. |
User-written programs | Estimate the space required for user-written programs and their associated databases. |
Databases | Estimate the size of each database. This information should be available in the documentation pertaining to the application-specific database. |
User data |
Estimate user disk-space requirements according to these guidelines:
|
Total requirements | The sum of the preceding estimates is the approximate amount of disk storage presently needed for your OpenVMS Cluster system configuration. |
Before you finish determining your total disk capacity requirements, you may also want to consider future growth for online storage and for backup storage.
For example, at what rate are new files created in your OpenVMS Cluster system? By estimating this number and adding it to the total disk storage requirements that you calculated using Table 5-2, you can obtain a total that more accurately represents your current and future needs for online storage.
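As a rough illustration of the Table 5-2 arithmetic, the following Python sketch sums per-component estimates (in 512-byte OpenVMS disk blocks) and adds a growth projection. All figures are hypothetical placeholders, not recommendations; your own values come from the SPDs, AUTOGEN, and application suppliers as described above.

```python
# Illustrative capacity estimate; every figure below is a placeholder.
BLOCK_SIZE = 512  # bytes per OpenVMS disk block

# Hypothetical per-component estimates, in blocks (see Table 5-2)
estimates = {
    "operating_system": 700_000,   # from installation docs and SPD
    "page_swap_dump":   400_000,   # from AUTOGEN
    "site_utilities":    50_000,
    "applications":     900_000,   # from each layered product's SPD
    "user_programs":    150_000,
    "databases":      2_000_000,
    "user_data":      1_500_000,
}

current_blocks = sum(estimates.values())

# Project growth: assumed rate of new-file creation, in blocks per month
growth_per_month = 120_000
months_planned = 24
total_blocks = current_blocks + growth_per_month * months_planned

print(f"Current need: {current_blocks:,d} blocks "
      f"({current_blocks * BLOCK_SIZE / 2**30:.1f} GB)")
print(f"With growth : {total_blocks:,d} blocks "
      f"({total_blocks * BLOCK_SIZE / 2**30:.1f} GB)")
```

The same spreadsheet-style sum works whether you track blocks, megabytes, or gigabytes; the point is to keep the per-component estimates and the growth rate as separate, reviewable inputs.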
To determine backup storage requirements, consider how you deal with obsolete or archival data. In most storage subsystems, old files become unused while new files come into active use. Moving old files from online to backup storage on a regular basis frees online storage for new files and keeps online storage requirements under control.
Planning for adequate backup storage capacity can make archiving
procedures more effective and reduce the capacity requirements for
online storage.
5.3 Choosing Disk Performance Optimizers
Estimating your anticipated disk performance work load and analyzing the work load data can help you determine your disk performance requirements.
You can use the Monitor utility and DECamds to help you determine which
performance optimizer best meets your application and business needs.
5.3.1 Performance Optimizers
Performance optimizers are software or hardware products that improve storage performance for applications and data. Table 5-3 explains how various performance optimizers work.
Optimizer | Description |
---|---|
DECram for OpenVMS | A disk device driver that enables system managers to create logical disks in memory to improve I/O performance. Data on an in-memory DECram disk can be accessed at a faster rate than data on hardware disks. DECram disks can be shadowed with Volume Shadowing for OpenVMS and served with the MSCP server. |
Solid-state disks | In many systems, approximately 80% of the I/O requests are for approximately 20% of the data stored online. Solid-state devices can yield the rapid access needed for this subset of the data. |
Disk striping |
Disk striping (RAID level 0) lets applications access an array of disk
drives in parallel for higher throughput. Disk striping works by
grouping several disks into a "stripe set" and then dividing
the application data into "chunks" that are spread equally
across the disks in the stripe set in a round-robin fashion.
By reducing access time, disk striping can improve performance, especially if the application:
Two independent types of disk striping are available:
Note: You can use Volume Shadowing for OpenVMS software in combination with disk striping to make stripe set members redundant. You can shadow controller-based stripe sets, and you can shadow host-based disk stripe sets. |
Extended file cache (XFC) | OpenVMS Alpha supports host-based caching with extended file cache (XFC), which can replace or coexist with virtual I/O cache (VIOC). XFC is a clusterwide, file-system data cache that offers several features not available with VIOC, including read-ahead caching and automatic resizing of the cache to improve performance. OpenVMS Integrity servers also support XFC but do not support VIOC.
Controllers with disk cache | Some storage technologies use memory to form disk caches. Accesses that can be satisfied from the cache can be done almost immediately and without any seek time or rotational latency. For these accesses, the two largest components of the I/O response time are eliminated. The HSZ and HSG controllers contain caches. Every RF and RZ disk has a disk cache as part of its embedded controller. |
Reference: See Section 9.5 for more information about how these performance optimizers increase an OpenVMS Cluster's ability to scale I/Os.
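The round-robin chunk placement that Table 5-3 describes for disk striping (RAID level 0) can be sketched as follows. This is an illustrative model, not the OpenVMS or controller implementation; the function name and chunk numbering are assumptions.

```python
# Illustrative RAID 0 placement model: logical chunk -> (disk, offset).
# Consecutive chunks land on consecutive stripe-set members, which is
# what lets large transfers engage all members in parallel.

def locate_chunk(logical_chunk: int, num_disks: int) -> tuple[int, int]:
    """Return (disk_index, chunk_offset_on_that_disk) for a logical chunk."""
    return logical_chunk % num_disks, logical_chunk // num_disks

# A hypothetical 4-member stripe set:
for chunk in range(8):
    disk, offset = locate_chunk(chunk, num_disks=4)
    print(f"logical chunk {chunk} -> disk {disk}, offset {offset}")
```

Because chunk n and chunk n+1 always live on different members, a transfer spanning several chunks keeps several drives busy at once, which is the source of the throughput gain.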
5.4 Determining Disk Availability Requirements
For storage subsystems, availability is determined by the availability
of the storage device as well as the availability of the path to the
device.
5.4.1 Availability Requirements
Some costs are associated with optimizing your storage subsystems for higher availability. Part of analyzing availability costs is weighing the cost of protecting data against the cost of unavailable data during failures. Depending on the nature of your business, the impact of storage subsystem failures may be low, moderate, or high.
Device and data availability options reduce and sometimes negate the
impact of storage subsystem failures.
5.4.2 Device and Data Availability Optimizers
Depending on your availability requirements, choose among the availability optimizers described in Table 5-4 for applications and data with the greatest need.
Availability Optimizer | Description |
---|---|
Redundant access paths | Protect against hardware failures along the path to the device by configuring redundant access paths to the data. |
Volume Shadowing for OpenVMS software |
Replicates data written to a virtual disk by writing the data to one or
more physically identical disks that form a shadow set. With replicated
data, users can access data even when one disk becomes unavailable. If
one shadow set member fails, the shadowing software removes the drive
from the shadow set, and processing continues with the remaining
drives. Shadowing is transparent to applications and allows data
storage and delivery during media, disk, controller, and interconnect
failure.
A shadow set can contain up to three members, and shadow set members can be anywhere within the storage subsystem of an OpenVMS Cluster system.
Reference: See HP Volume Shadowing for OpenVMS for more information about volume shadowing. |
System disk redundancy |
Place system files judiciously on disk drives with multiple access
paths. OpenVMS Cluster availability increases when you form a shadow
set that includes the system disk. You can also configure an OpenVMS
Cluster system with multiple system disks.
Reference: For more information, see Section 10.2. |
Database redundancy | Keep redundant copies of certain files or partitions of databases that are, for example, updated overnight by batch jobs. Rather than using shadow sets, which maintain a complete copy of the entire disk, it might be sufficient to maintain a backup copy on another disk or even on a standby tape of selected files or databases. |
Newer devices | Protect against failure by choosing newer devices. Typically, newer devices provide improved reliability and mean time between failures (MTBF). Newer controllers also improve reliability by employing updated chip technologies. |
Comprehensive backup strategies |
Frequent and regular backups are the most effective way to ensure the
availability of your data.
Reference: For information about Fibre Channel tape support, see Section 7.5. For information about backup strategies and OpenVMS Backup, refer to the OpenVMS System Manager's Manual. For information about additional backup software and solutions, visit: http://h18006.www1.hp.com/storage/tapestorage.html and http://h71000.www7.hp.com/openvms/storage.html. |
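To see why the redundancy options in Table 5-4 pay off, consider a rough availability calculation: assuming independent failures, a shadow set is unavailable only when every member is down at once. The per-disk availability figure below is a hypothetical assumption for illustration, not a measured value.

```python
# Illustrative availability model for a shadow set of identical,
# independently failing members (an idealized assumption).

def shadow_set_availability(member_availability: float, members: int) -> float:
    """Probability that at least one shadow set member is up."""
    return 1.0 - (1.0 - member_availability) ** members

single = 0.99  # hypothetical availability of one disk (99%)
for members in (1, 2, 3):  # a shadow set can contain up to three members
    a = shadow_set_availability(single, members)
    print(f"{members}-member shadow set: {a:.6f} available")
```

Even with this simplified model, each added member multiplies the unavailability by the single-disk failure probability, which is why shadowing the system disk and other critical volumes has such a large effect.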
5.5 SAS-Based Storage
SAS is a point-to-point architecture that transfers data to and from SCSI storage devices by using serial communication (one bit at a time).
5.5.1 Storage Devices
Dual-domain SAS creates an additional domain to guard against a pathway failure in a single SAS domain. The additional domain uses an open port on an HP Smart Array controller that supports dual-domain SAS. The second port on a dual-domain capable Smart Array controller generates a unique identifier and can support its own domain.
The following SAS controllers are supported:
The following supported Smart Arrays have a SAS backplane but cannot be considered SAS HBAs:
No external controllers are supported on SAS; you can connect only JBODs (such as the MSA60/70) and internal disks to a SAS HBA. However, the P700m can be connected to the MSA2000SA (the SAS version of the MSA2000).
5.6 SCSI-Based Storage
The Small Computer Systems Interface (SCSI) bus is a storage
interconnect based on an ANSI industry standard. You can connect up to
a total of 8 or 16 nodes (3 of which can be CPUs) to the SCSI bus.
5.6.1 Supported Devices
The following devices can connect to a single host or multihost SCSI bus:
The following devices can connect only to a single host SCSI bus:
5.7 Fibre Channel-Based Storage
The Fibre Channel interconnect is a storage interconnect that is based on an ANSI industry standard.
5.7.1 Storage Devices
The HSG and HSV storage controllers can connect to a single host or to
a multihost Fibre Channel interconnect. For more information about
Fibre Channel hardware support, see Section 7.2.
5.8 Host-Based Storage
Host-based storage devices can be connected locally to OpenVMS Cluster member systems using local adapters. You can make this locally connected storage available to other OpenVMS Cluster members by configuring a node as an MSCP server.
You can use local adapters to connect each disk to two access paths
(dual ports). Dual porting allows automatic failover of disks between
nodes.
5.8.1 Internal Buses
Locally connected storage devices are attached to a system's internal bus.
For more information about the supported buses, see the HP OpenVMS
I/O User's Reference Manual.
5.8.2 Local Adapters
Following is a list of local adapters and their bus types:
For the list of supported internal buses and local adapters, see the Software Product Description.