OpenVMS System Manager's Manual
18.5.4 Disabling Read-Ahead Caching
XFC uses a technique called read-ahead caching to
improve the performance of applications that read data sequentially. It
detects when a file is being read sequentially in equal-sized I/Os, and
fetches data ahead of the current read, so that the next read request
can be satisfied from the cache.
To disable read-ahead caching on the local node, set the dynamic system
parameter VCC_READAHEAD to 0. By default, this parameter is 1, which
allows the local node to use read-ahead caching.
Example
This example disables read-ahead caching on the local node:
$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE CURRENT
SYSGEN> SET VCC_READAHEAD 0
SYSGEN> WRITE ACTIVE
|
This series of commands affects volumes currently mounted on the local
node, as well as volumes mounted in the future. Once you enter these
commands, read-ahead caching is not used on the local node.
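To confirm the change on the running system, you can display the active
value of the parameter; a minimal sketch using SYSGEN:
$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE ACTIVE
SYSGEN> SHOW VCC_READAHEAD
SYSGEN> EXIT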
18.5.5 Monitoring Performance
XFC provides more performance information than VIOC. For example, you can
obtain statistics on a system-wide, volume-wide, or even per-file basis. Disk
I/O response times are also available. See the OpenVMS DCL Dictionary: N--Z for a
description of the SHOW MEMORY command.
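In addition to the system-wide display described in the next section, SHOW
MEMORY accepts keywords on the /CACHE qualifier for narrower views. The
following lines are a hedged sketch only: the VOLUME and FILE keywords and
the device and file names shown are assumptions, so verify the exact syntax
in the OpenVMS DCL Dictionary: N--Z before relying on it:
$ SHOW MEMORY /CACHE=(VOLUME=DSA1)                    ! per-volume statistics; keyword is an assumption
$ SHOW MEMORY /CACHE=(FILE=DSA1:[DATA]CUSTOMERS.DAT)  ! per-file statistics; keyword is an assumption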
18.5.5.1 System-Wide Statistics
Use SHOW MEMORY /CACHE to monitor the overall system performance of
XFC. For example:
$ SHOW MEMORY /CACHE
System Memory Resources on 26-JAN-2001 15:58:18.71
Extended File Cache (Time of last reset: 24-JAN-2001 15:03:39.05)
Allocated (Mbytes) (1) 3000.00 Maximum size (Mbytes) (11) 5120.00
Free (Mbytes) (2) 2912.30 Minimum size (Mbytes) (12) 3000.00
In use (Mbytes) (3) 87.69 Percentage Read I/Os (13) 98%
Read hit rate (4) 92% Write hit rate (14) 0%
Read I/O count (5) 178136 Write I/O count (15) 1867
Read hit count (6) 165470 Write hit count (16) 0
Reads bypassing cache (7) 2802 Writes bypassing cache (17) 39
Files cached open (8) 392 Files cached closed (18) 384
Vols in Full XFC mode (9) 0 Vols in VIOC Compatible mode (19) 4
Vols in No Caching mode (10) 1 Vols in Perm. No Caching mode (20) 0
|
(1) Allocated
|
The amount of memory currently allocated to the cache.
|
(2) Free
|
The amount of memory currently allocated to the cache that is not being
used.
|
(3) In Use
|
The amount of memory currently allocated to the cache that is being
used. This is the difference between the Allocated value and the Free
value.
|
(4) Read hit rate
|
The ratio of the Read hit count field to the Read I/O count field.
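For example, in the display above the Read hit rate is the Read hit count
of 165470 divided by the Read I/O count of 178136, shown as 92%.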
|
(5) Read I/O count
|
The total number of read I/Os seen by the cache since system startup.
|
(6) Read hit count
|
The total number of read hits since system startup. A read hit is a
read I/O that did not require a physical I/O to disk because the data
was found in the cache.
|
(7) Reads bypassing cache
|
The total number of read I/Os since system startup that were seen by
the cache but were not cached, for example, because they were too big,
or they were for volumes mounted /NOCACHE, or they specified one of the
following QIO modifiers: IO$M_DATACHECK, IO$M_INHRETRY, or
IO$M_NOVCACHE.
|
(8) Files cached open
|
The number of open files currently being cached.
|
(9) Volumes in Full XFC mode
|
This is 0 if caching is disabled either on the local node or on another
node in the OpenVMS Cluster.
If caching is
not disabled on any node, this is the number of volumes that
are mounted on the local node and that satisfy both of these criteria:
- The volume is not mounted /NOCACHE on the local node or any other
node in the OpenVMS Cluster.
- The volume is mounted only on nodes that are using XFC.
|
(10) Volumes in No Caching mode
|
If caching is disabled on the local node or on another node in the
OpenVMS Cluster, this is the number of volumes that are currently
mounted on the local node. Otherwise it is zero.
|
(11) Maximum size
|
The maximum size that the cache could ever grow to.
|
(12) Minimum size
|
The minimum size that the cache could ever shrink to. This is
controlled by the value of the VCC$MIN_CACHE_SIZE entry in the reserved
memory registry.
|
(13) Percentage Read I/Os
|
Percentage of I/Os that are reads.
|
(14) Write hit rate
|
This field is reserved for future use.
|
(15) Write I/O count
|
The total number of write I/Os seen by the cache since system startup.
|
(16) Write hit count
|
This field is reserved for future use.
|
(17) Writes bypassing cache
|
The total number of write I/Os since system startup that were seen by
the cache but were not cached, for example, because they were too big,
or they were for volumes mounted /NOCACHE, or they specified one of the
following QIO modifiers: IO$M_DATACHECK, IO$M_ERASE, IO$M_INHRETRY, or
IO$M_NOVCACHE.
|
(18) Files cached closed
|
The number of closed files that still have valid data in the cache.
|
(19) Volumes in VIOC compatible mode
|
This is 0 if caching is disabled either on the local node or on another
node in the OpenVMS Cluster.
If caching is
not disabled on any node, this is the number of volumes that
are mounted on the local node and that satisfy either of these criteria:
- The volume is mounted /NOCACHE either on the local node or on
another node in the OpenVMS Cluster.
- The volume is mounted on a node in the OpenVMS Cluster that is
using VIOC.
The files in these volumes can't be cached when they are being
shared for writing in an OpenVMS Cluster.
|
(20) Vols in Perm. No Caching mode
|
This field should be zero. If it is nonzero, XFC has detected an illegal
write operation to a device and has permanently disabled caching on that device.
|
18.5.6 Using XFC in a Mixed Architecture OpenVMS Cluster
In an OpenVMS Cluster, some nodes can use XFC and other nodes can use
VIOC. This allows mixed architecture clusters to benefit from XFC.
When a volume is mounted on a node that is using VIOC, the nodes using
XFC cannot cache any files in the volume that are shared for writing. A
file that is shared for writing is one that is being
accessed by more than one node in an OpenVMS Cluster, and at least one
of those nodes opened it for write access.
18.6 Managing the Virtual I/O Cache
This section describes the virtual I/O cache (VIOC) and how to manage it.
The virtual I/O cache is a clusterwide, write-through, file-oriented
disk cache that can reduce the number of disk I/O operations and
increase performance. The purpose of the virtual I/O cache is to
increase system throughput by reducing file I/O response times with
minimum overhead. The virtual I/O cache operates transparently to
system management and application software, and maintains system
reliability while significantly improving virtual disk I/O read
performance.
18.6.1 Understanding How the Cache Works
The virtual I/O cache can store data files and image files. For
example, ODS-2 disk file data blocks are copied to the virtual I/O
cache the first time they are accessed. Any subsequent read requests for
the same data blocks are satisfied from the virtual I/O cache (hits),
eliminating the physical disk I/O operations (misses) that would otherwise
have occurred.
Depending on your system work load, you should see increased
application throughput, increased interactive responsiveness, and
reduced I/O load.
Note
Applications that initiate single read and write requests do not
benefit from virtual I/O caching, because the data is never reread from
the cache. Applications that rely on implicit I/O delays might abort or
yield unpredictable results.
|
Several policies govern how the cache manipulates data, as follows:
- Write-through---All write I/O requests are written to the cache as
well as to the disk.
- Least Recently Used (LRU)---If the cache is full, the least
recently used data in the cache is replaced.
- Cached data maintained across file close---Data remains in the
cache after a file is closed.
- Allocate on read and write requests---Cache blocks are allocated
for read and write requests.
18.6.2 Selecting VIOC on an Alpha System
If, for some reason, you want an Alpha system to use VIOC instead of
XFC, follow these steps:
1. Remove the entry for VCC$MIN_CACHE_SIZE from the reserved memory
registry, using the Sysman utility's RESERVED_MEMORY REMOVE command:
$ RUN SYS$SYSTEM:SYSMAN
SYSMAN> RESERVED_MEMORY REMOVE VCC$MIN_CACHE_SIZE /NOGLOBAL_SECTION
|
This makes sure that no memory is allocated to XFC in Step 4, when
the system reboots with VIOC.
2. Set the VCC_FLAGS system parameter to 1 (see the sketch following
these steps).
3. Run AUTOGEN to ensure that other system parameters allow for the
new value. This is not essential, but it is advisable.
4. Reboot the system. VIOC is automatically loaded during startup
instead of XFC, because VCC_FLAGS is 1.
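Steps 2 through 4 might look like the following minimal sketch, which
assumes the usual SYS$SYSTEM:MODPARAMS.DAT parameter file and standard
AUTOGEN phases; adjust it to your site's conventions. First add a line
such as this to MODPARAMS.DAT:
VCC_FLAGS = 1        ! Use VIOC instead of XFC at the next boot
Then run AUTOGEN through the REBOOT phase so the system restarts with
the new value:
$ @SYS$UPDATE:AUTOGEN GETDATA REBOOT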
If you forgot to remove the VCC$MIN_CACHE_SIZE entry from the reserved
memory registry in Step 1, memory is allocated to XFC even though XFC
is not loaded. Nothing can use this memory. If this happens, use the
Sysman utility's RESERVED_MEMORY FREE command to release this memory:
$ RUN SYS$SYSTEM:SYSMAN
SYSMAN> RESERVED_MEMORY FREE VCC$MIN_CACHE_SIZE /NOGLOBAL_SECTION
|
18.6.3 Controlling the Size of the Cache
The way that you control the size of VIOC depends on whether you have
an OpenVMS Alpha or OpenVMS VAX system.
OpenVMS Alpha
On OpenVMS Alpha systems, the size of VIOC is fixed at system startup
time. The cache can't shrink or grow. The value of the static system
parameter VCC_MAXSIZE specifies the size of the cache in blocks. By
default it is 6400 blocks (3.2MB).
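Each block is 512 bytes, so the default of 6400 blocks is 6400 x 512 =
3,276,800 bytes, or roughly 3.2 MB; a VCC_MAXSIZE of 12800 blocks, for
example, would give a cache of roughly 6.4 MB.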
To change the size of VIOC on an OpenVMS Alpha system, follow these
steps:
- Set the VCC_MAXSIZE system parameter to the required value.
- Run AUTOGEN to ensure that other system parameters allow for the
new value. This is not essential, but it is advisable.
- Reboot the system to make the new value effective.
OpenVMS VAX
On OpenVMS VAX systems, you can use the static system parameter
VCC_PTES to specify the maximum size of VIOC. This parameter specifies
the size in pages. By default it is 2,000,000,000.
VIOC automatically shrinks and grows, depending on your I/O workload
and how much spare memory is available on your system. As your I/O
workload increases, the cache automatically grows, but never to more
than the maximum size. And when your applications need memory, the
cache automatically shrinks.
To change the maximum size of VIOC on an OpenVMS VAX system, follow
these steps:
- Set the VCC_PTES system parameter to the required value.
- Run AUTOGEN to ensure that other system parameters allow for the
new value. This is not essential, but it is advisable.
- Reboot the system to make the new value effective.
18.6.4 Displaying VIOC Statistics
Use the DCL command SHOW MEMORY/CACHE/FULL to display statistics about
the virtual I/O cache, as shown in the following example:
$ SHOW MEMORY/CACHE/FULL
System Memory Resources on 10-OCT-1994 18:36:12.79
Virtual I/O Cache
Total Size (pages) (1) 2422 Read IO Count (6) 9577
Free Pages (2) 18 Read Hit Count (7) 5651
Pages in Use (3) 2404 Read Hit Rate (8) 59%
Maximum Size (SPTEs) (4) 11432 Write IO Count (9) 2743
Files Retained (5) 99 IO Bypassing the Cache (10) 88
|
Note
This example shows the output for the SHOW MEMORY/CACHE/FULL command on
a VAX system. The SHOW MEMORY/CACHE/FULL command displays slightly
different fields on an Alpha system.
|
(1) Total Size
|
Displays the total number of system memory pages that VIOC currently
controls.
|
(2) Free Pages
|
Displays the number of pages controlled by VIOC that do not contain
cache data.
|
(3) Pages in Use
|
Displays the number of pages controlled by VIOC that contain valid
cached data.
|
(4) Maximum Size
|
Shows the maximum size that the cache could ever grow to.
|
(5) Files Retained
|
Displays the number of closed files whose file system control
information is retained because they still have valid data in the cache.
|
(6) Read I/O Count
|
Displays the total number of read I/Os that have been seen by VIOC
since the last system boot.
|
(7) Read Hit Count
|
Displays the total number of read I/Os since the last system boot that
did not require a physical I/O because the data was found in the cache.
|
(8) Read Hit Rate
|
Displays the ratio of the read hit count to the read I/O count.
|
(9) Write I/O Count
|
Shows the total number of write I/Os that have been seen by the cache
since the last system boot.
|
(10) I/O Bypassing the Cache
|
Displays the count of I/Os that, for some reason, were neither
satisfied from the cache nor used to update it.
|
18.6.5 Enabling VIOC
By default, virtual I/O caching is enabled. Use the following system
parameters to enable or disable caching. Change the value of the
parameters in MODPARAMS.DAT, as follows:
Parameter            Enabled    Disabled
VCC_FLAGS (Alpha)       1           0
VBN_CACHE_S (VAX)       1           0
Once you have updated MODPARAMS.DAT to change the value of the
appropriate parameter, you must run AUTOGEN and reboot the node or
nodes on which you have enabled or disabled caching. Caching is
automatically enabled or disabled during system initialization. No
further user action is required.
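For example, to disable caching at the next boot you might add one of
the following lines to MODPARAMS.DAT (a minimal sketch; use the parameter
that matches your architecture) and then run AUTOGEN and reboot as
described above:
VCC_FLAGS = 0        ! Alpha: disable virtual I/O caching
VBN_CACHE_S = 0      ! VAX: disable virtual I/O caching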
18.6.6 Determining If VIOC Is Enabled
The SHOW MEMORY/CACHE command indicates whether VIOC caching is on or
off on a running system. (This is much easier than using SYSGEN.)
You can also use SYSGEN to examine parameters before a system is booted.
For example, you can check the system parameter VCC_FLAGS (on Alpha) or
VBN_CACHE_S (on VAX) to see whether virtual I/O caching is enabled, as
shown in the following Alpha example:
$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> SHOW VCC_FLAGS
|
A value of 0 indicates that caching is disabled; the value 1 indicates
caching is enabled.
18.6.7 Memory Allocation and VIOC
The memory allocated to caching is determined by the size of the
free-page list. The size of the virtual I/O cache can grow if one of
the following conditions is true:
- If the amount of available free memory is twice the value of
FREEGOAL and if proactive memory reclamation is enabled for
periodically waking processes.
- If the amount of available free memory is equal to the value of
FREEGOAL and if proactive memory reclamation is enabled for
long-waiting processes.
- If the amount of available free memory is greater than GROWLIM and
if proactive memory reclamation is not enabled.
The cache size is also limited by the following:
- The number of system page table entries (SPTE) that are available.
This number is a calculated value determined at boot time.
- The demands of the memory management subsystem. The memory
management subsystem has a direct interface to cache so that, when
necessary, it can demand that the cache return space to it.
How is memory reclaimed from the cache? The swapper can reclaim memory
allocated to the virtual I/O cache by using first-level trimming. In
addition, a heuristic primitive shrinks the cache, returning memory in
small increments.
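To see how the conditions above apply on a running system, you can
compare the free-page list with the relevant parameters; a minimal sketch:
$ SHOW MEMORY /PHYSICAL_PAGES
$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE ACTIVE
SYSGEN> SHOW FREEGOAL
SYSGEN> SHOW GROWLIM
SYSGEN> EXIT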
18.6.8 Adjusting VIOC Size
The size of the virtual I/O cache is controlled by the system parameter
VCC_MAXSIZE. The amount of memory specified by this parameter is
statically allocated at system initialization and remains owned by the
virtual I/O cache.
To increase or decrease the size of the cache, modify VCC_MAXSIZE and
reboot the system.
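For example, the following minimal sketch changes VCC_MAXSIZE with SYSGEN
and then reboots; 12800 blocks is an arbitrary example value, and you
would normally also record the change in MODPARAMS.DAT so that AUTOGEN
preserves it:
$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE CURRENT
SYSGEN> SET VCC_MAXSIZE 12800
SYSGEN> WRITE CURRENT
SYSGEN> EXIT
$ @SYS$SYSTEM:SHUTDOWN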
18.6.9 VIOC and OpenVMS Cluster Configurations
The cache works on all supported configurations from single-node
systems to large mixed-interconnect OpenVMS Cluster systems. The
virtual I/O cache is nodal; that is, the cache is local to each OpenVMS
Cluster member. Any base system can support virtual I/O caching; an
OpenVMS Cluster license is not required to use the caching feature.
Note
If any member of an OpenVMS Cluster does not have caching enabled, then
no caching can occur on any node in the OpenVMS Cluster (including the
nodes that have caching enabled). This condition remains in effect
until the node or nodes that have caching disabled either enable
caching or leave the cluster.
|
The lock manager controls cache coherency. The cache is flushed when a
node leaves the OpenVMS Cluster. Files opened on two or more nodes with
write access on one or more nodes are not cached.
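Because a single member with caching disabled turns off caching on every
node, it can be useful to check all members at once. A minimal sketch
using the Sysman utility's DO command (this assumes you have the
privileges required to issue clusterwide DO commands):
$ RUN SYS$SYSTEM:SYSMAN
SYSMAN> SET ENVIRONMENT /CLUSTER
SYSMAN> DO SHOW MEMORY /CACHE
SYSMAN> EXIT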
|