HP Volume Shadowing for OpenVMS: OpenVMS Version 8.4 > Chapter 8 Host-Based Minimerge (HBMM)

Guidelines for Establishing HBMM Policies

Establishing HBMM policies is likely to be an ongoing process as configurations change and as you learn more about how HBMM works and how it affects various operations on your systems. This section describes a number of considerations to help you determine what policies are appropriate for your configuration. The settings depend on your hardware and software configuration, the computing load, and your operational requirements. These guidelines can assist you in selecting the initial settings for your configuration. As you observe the results in your configuration, you can make further adjustments to suit your computing environment.

Several factors must be considered when selecting the number of master bitmaps to specify in a policy and the systems that host them. The first is how many master bitmaps to use in the configuration. For OpenVMS Version 8.3 and earlier, six is the maximum number of HBMM master bitmaps per shadow set; starting with OpenVMS Version 8.4, the maximum is 12 per shadow set. Each additional master bitmap has a slight impact on write performance and also consumes memory on each system (as described in “Bitmaps: Master and Local”). Using only one master bitmap creates a single point of failure: if the system hosting the master bitmap fails, the shadow set undergoes a full merge. Therefore, weigh the memory consumption against the adverse effects of a full merge. Multiple master bitmaps provide the greatest defense against performing full merges.

Another consideration when selecting a system to host a master bitmap is the I/O bandwidth of the various systems. Keep in mind that minimerges are always performed on a system that has a master bitmap. Therefore, low-bandwidth systems, such as satellite cluster members, are not good candidates.
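As a sketch of how these decisions are expressed, a policy can name several candidate master bitmap systems while limiting how many master bitmaps are actually created. The device and node names below are illustrative, not taken from this document:

```dcl
$ ! Illustrative sketch: request two master bitmaps for shadow set DSA1:,
$ ! chosen from three candidate high-bandwidth systems. Node names
$ ! (NODE1, NODE2, NODE3) are hypothetical placeholders.
$ SET SHADOW DSA1: /POLICY=HBMM=((MASTER_LIST=(NODE1,NODE2,NODE3), COUNT=2))
```

Specifying more candidates in MASTER_LIST than the COUNT value allows the software to fall back to another listed system if a preferred host is unavailable.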
The disaster tolerance of the configuration is also important in the decision process. Specifying systems at multiple sites to host master bitmaps helps ensure that a minimerge is performed even if connectivity to an entire site is lost. In a two-site configuration, ensure that half the master bitmap systems are at each site; in a three-site configuration, ensure that one third of the master bitmap systems are at each of the three sites.

When selecting a reset threshold value, you must balance the effects of bitmap resets on I/O performance against the time it takes to perform HBMM minimerges. The goal is to set the reset value as low as possible (thus decreasing merge times) without affecting application I/O performance: too low a value degrades I/O performance, and too high a value causes merges to take extra time.

HBMM bitmaps keep track of writes to a shadow set. The more bits that are set in the bitmap, the greater the amount of merging required in the event of a minimerge. HBMM clears the bitmap (after ensuring that all outstanding writes have completed so that the members are consistent) when certain conditions are met (see the description of SHADOW_HBMM_RTC in “Volume Shadowing Parameters”). A freshly cleared bitmap, with few bits set, performs a minimerge faster.

The bitmap reset, however, can affect I/O performance. Before a bitmap reset can occur, all write I/O to the shadow set must be paused, and any write I/O in progress must be completed; only then is the bitmap cleared. This is performed on all systems, on a per-shadow-set basis. Therefore, avoid a reset threshold setting that causes frequent resets. You can view the number of resets performed by using the SHOW SHADOW command. The HBMM Reset Count appears in the last section of the display, as shown in the following example:
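The original display output was lost in this copy of the section. The abbreviated sketch below shows only where the field of interest appears; the device name, count, and surrounding fields are illustrative, and the exact layout of the SHOW SHADOW display varies by OpenVMS version:

```dcl
$ SHOW SHADOW DSA1:
  ...
  HBMM Reset Count      2
```

A steadily climbing reset count relative to the write load is a sign that the reset threshold is set too low for this shadow set.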
Writes that need to set bits in the bitmap are slightly slower than writes to areas already marked as having been written. Therefore, if many of the writes to a particular shadow set are concentrated in certain “hot” files, the reset threshold must be made large enough that the same bits are not constantly set and then cleared. On the other hand, if the reset threshold is too large, the advantages of HBMM are reduced. For example, if 50% of the bitmap is populated (that is, 50% of the shadow set has been written to since the last reset), the HBMM merge takes approximately 50% of the time of a full merge.

You can change the RESET_THRESHOLD value while a policy is in effect by specifying the same policy with a different RESET_THRESHOLD value. (The syntax for specifying a policy, including the RESET_THRESHOLD keyword, is described in “HBMM Policy Specification Syntax”.) The following example shows how to change the RESET_THRESHOLD value for a policy that is in effect:
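The example that originally followed is not present in this copy. A minimal sketch of the technique described above, respecifying the same policy with only the threshold changed (device name, node names, and threshold value are hypothetical):

```dcl
$ ! Same MASTER_LIST and COUNT as the policy currently in effect;
$ ! only RESET_THRESHOLD differs. All values shown are illustrative.
$ SET SHADOW DSA1: /POLICY=HBMM=((MASTER_LIST=(NODE1,NODE2), COUNT=2, -
  RESET_THRESHOLD=100000))
```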
HBMM policies are defined to implement decisions regarding master bitmaps. Some sites might find that a single policy effectively implements those decisions. Other sites might require greater granularity and therefore implement multiple policies. Multiple policies are required when the cluster includes enough high-bandwidth systems that you want to ensure that the merge load is spread among them. Note that minimerges occur only on systems that host a master bitmap. Therefore, if 12 high-bandwidth systems are set up to perform minimerge or merge operations (the system parameter SHADOW_MAX_COPY is greater than zero on all of them), you must ensure that the master bitmaps are spread among these high-bandwidth systems.

Multiple HBMM policies are also useful when shadow sets need different bitmap reset thresholds. The list of master bitmap systems can be the same for each policy, but the thresholds can differ.
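One way this can be sketched, assuming named policies as described in “HBMM Policy Specification Syntax” (all policy names, node names, devices, and threshold values below are hypothetical):

```dcl
$ ! Two named policies sharing one master list but with different
$ ! reset thresholds; names and values are illustrative only.
$ SET SHADOW /POLICY=HBMM=((MASTER_LIST=(NODE1,NODE2), COUNT=2, -
  RESET_THRESHOLD=50000)) /NAME=BUSY_SETS
$ SET SHADOW /POLICY=HBMM=((MASTER_LIST=(NODE1,NODE2), COUNT=2, -
  RESET_THRESHOLD=1000000)) /NAME=QUIET_SETS
$ ! Apply a named policy to a specific shadow set:
$ SET SHADOW DSA2: /POLICY=HBMM=BUSY_SETS
```

Keeping the master list identical across policies preserves the disaster-tolerance placement decisions while still letting write-intensive and lightly used shadow sets reset their bitmaps at different rates.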