--
Hank Lee <hank.lee_at_vta.org>
UNIX Systems Administrator
SAP Basis Administrator
Santa Clara Valley Transportation Authority

==================================================================
===Viktor.holmberg_at_abnamro.co.uk==================================

No. You haven't missed anything. LSM will perform a physical copy from
disk1 to disk2 when it mirrors or resyncs the mirrors.

I haven't tried to grow a volume. The only option you have is to create
a number of smaller mirrored volumes and then stripe them. Of course
this defeats the HSZ70 striping.

If I were in your shoes I would create the new volume with the system
up, wait until it's finished, then add it to ASE. I think you are being
optimistic with 6 hours. Your main bottleneck is the FWD SCSI bus and
the KZPSA controller. It will be even slower if the system is loaded.

If you are using ASE 1.5 you don't need to shut down the service to
update the LSM configuration.

Let me know if I can offer any other help.

Viktor

==================================================================
===alan_at_nabeth.cxo.dec.com========================================

If the HP mirroring is so fast for a large amount of data, it seems
clear that they aren't copying the data before they claim the operation
is complete. There are techniques to keep track of which data has and
hasn't been mirrored, letting the startup complete very quickly while
the actual copying takes place in the background. But you have to
realize that your data isn't protected until it has all been copied.
This method is useful for a new file system, because most of the volume
is considered free space and doesn't need to be mirrored.

Copying 54 GB of data in 6 hours is around 2.6 MB/sec, which isn't bad,
but isn't great. There might be parameters that control the copy size,
either in the command that makes the mirror or in some other command.
With so many LSM commands and so many options, going through the manual
pages and the user's guide is about the only way to find these things.

==================================================================
===Alan Davis===============================================

Time to create mirrors depends on several factors:

- Size of the volume to be mirrored
- Number of plexes being added
- Speed of the disks
- Layout of the volume's subdisks (contiguous vs. subdisks scattered
  across disks)
- Layout of the volume's plexes across disks/controllers
- Speed of the buses involved
- Amount of disk activity on the volume during the mirror creation

Six hours doesn't sound unreasonable for a 54 GB populated volume to
mirror. Note that you can continue to access the volume if you are
doing the mirror operation with the service online, but it will slow
access and completion time a certain amount.

You are correct that a mirror operation or resync will be interrupted
when a service is failed over, stopped or started. The first thing that
happens when you start or fail over a service is that the stop scripts
are run, to ensure the start script begins from a known configuration.
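Most of these factors, and the progress of a running mirror attach, can
be checked with LSM's volprint. A minimal sketch, assuming the disk
group "tony" and the test volume from the original posting below:

#volprint -g tony -ht testvol

(This lists the volume's plexes and subdisks along with their disks and
offsets; a plex that is still being synchronized normally shows a STALE
state, flipping to ACTIVE once the copy completes.)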
==================================================================
===jrobens_at_davidjones.com.au======================================

We have 64 GB mirrors, and have to confirm your results: it can take up
to 3 hours for us to create a single mirror, depending on how busy the
system is (about 1.5 days to do all of them - making breaking the
mirrors inappropriate for backups). Apparently a block copy of the
entire disk has to be performed when mirrors are created. We are using
clones for backups instead.

We have not seen the problem you describe with TruCluster (and we do
have one). It might just be luck, as we don't rebuild the mirrors
regularly, and from memory we have concentrated on starting the cluster
after a failure rather than trying to re-create the mirrors.

==================================================================
===Original posting===============================================

We are about to schedule some down time on a production ASE cluster to
add some additional mirrored LSM volumes to an existing service. The
volumes are approx. 54 GB (HSZ70 stripe sets). The mirror sets are
under LSM control and mirrored across controllers.

I have done a few tests with creating and mirroring the volumes (on
200000-block volumes) just to get some timings I can use to extrapolate
to the 54 GB volumes. I get some interesting (disappointing) results.
Let me explain:

#volassist -g tony -U fsgen make testvol 200000 mirror=yes rz14 rz30

Average time = 44 seconds.

#volassist -g tony -U fsgen make testvol 200000 mirror=no rz14
#volassist -g tony mirror testvol rz30

Average time 39 seconds - about 11% quicker (i.e. 88% of the time taken
to create the volume with two mirrors).

#volassist -g tony -U fsgen make testvol 200 mirror=no rz14
#volassist mirror testvol rz30
#volassist -g tony -f growto testvol 200000

Average time 43 seconds - virtually identical to creating the volume
with two mirrors.

Using the second example, this extrapolates roughly to a little under
6 hours to mirror up! A bit longer than I had hoped for. I was
disappointed that example 3 did not mirror up much faster. Under
HP-UX/LVM, creating a small volume, adding a mirror while the volume is
still small, and then extending the mirrored logical volume to its
required size is virtually instantaneous.

Does anybody have any ideas how I can speed up the mirror process
(apart from doing it the day before!)? Am I missing something here?

The volumes can't be in the process of mirroring when they are added to
the ASE service (we know this through unfortunate experience). The
service restart times out because the disk group fails to export due to
uncompleted LSM activity. You either have to add the volumes (no
mirrors) and the disks which will contain the mirror plexes to the
service, allow the service to restart, and then add the mirrors (which
is what we ended up doing); or mirror up the volumes and wait for the
mirroring to complete before adding them to the service.
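As a worked version of that second workaround, here is a minimal
sketch, assuming the disk group "tony" and disks rz14/rz30 from the
tests above; the volume name "newvol" is hypothetical, and 113246208 is
54 GB expressed in 512-byte sectors:

#volassist -g tony -U fsgen make newvol 113246208 mirror=yes rz14 rz30
#volprint -g tony -ht newvol

(Wait until both of newvol's plexes show an ACTIVE state - i.e. the
resync has finished - before adding the volume to the ASE service with
asemgr; adding it while a plex is still STALE risks the export timeout
described above.)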