---

There should not be any size limitation; 200 GB is not that large. I do not know whether there are any known problems with rmvol that would explain the high CPU use; you might be hitting a bug. Since V4.0E is no longer a supported release, I suspect you cannot even get the patch kit to check the bug list, and that won't help you if you are in the middle of the process anyway.

---

I did this successfully on a domain which had a total of over 200 GB. The individual RAID sets which I was adding and removing were 40 GB and 20 GB respectively. I never saw the behaviour you are describing.

---

I'm guessing that with a domain that large, you probably have a very large frag file. This file, which holds the very small file fragments, can exist on any or all of the volumes in the domain, so with a very large number of small files it can get quite big. If it is large, it could take a long time to migrate the extents that reside on volume A to the other volumes. This file has a tag number of one and can be examined using the command "showfile -x /<mount-pt>/.tags/1".

---

I have recently migrated 2 TB of data using addvol/rmvol on a GS140 (AS8400) clustered pair. My AdvFS domains are 100 GB each, and some of the files within them are over 60 GB each. Things I noticed with this:

- It takes a long time (depending upon how your disks are attached).
- It does consume a large amount of CPU.
- It seems to run in stages (I think it works at a file level).
- We use cpuinfo to monitor our systems, and at certain stages all 10 CPUs were in I/O!

Of course, I don't need to say "back up your data beforehand", do I? One word of caution: we run an ASE cluster, and one of our services is an NFS filesystem. Migrating this caused HUGE problems, resulting in a reboot! I'm not sure if it was NFS or ASE that caused the problem... but be aware.

---

I do not have the direct answer for you, but can add insight.
There is a special metadata area on the disk which does get starved during large AdvFS utility data operations. I have this issue too, and my senior had a procedure for increasing it ("I think he called this the bmj area"), but he has left, and his procedure with him. Although this is not a direct answer, what you are looking for is a procedure to increase your metadata area so that large file operations will succeed. Hope this helps.

---

If the volume being removed has lots of files, it may take considerable time to clean up the metadata being changed. In V4 and earlier, some of the algorithms used become very CPU intensive when there are lots of files. I don't recall if this is an absolute "lots of files" problem or a "lots of files in a single directory" sort of problem. I don't think this is a limit due to volume size, but simply a consequence of a large volume.

---

----- Original Message -----
From: "Carlos Chua" <chuacarlos_at_hotmail.com>
To: <tru64-unix-managers_at_ornl.gov>
Sent: Saturday, January 20, 2001 7:26 AM
Subject: rmvol hangs when volume too large?

> Hi,
>
> I'm currently using the addvol/rmvol commands to migrate a disk that
> contains 200 GB of data. My environment is V4.0E. I'm using the showfdmn
> command to check the status of the file migration. It seems that whenever
> the rmvol process is about to finish, it starts taking up 99.9% CPU time.
> The process has now been stuck for more than an hour, and I'm afraid the
> server might crash at any moment, as happened last week.
>
> My question is: since I was able to do the same process on a smaller
> domain but only have problems with a bigger one, does that mean there is
> a size limitation when using the rmvol command?
>
> Any help greatly appreciated.
>
> Regards,
>
> chuacarlos_at_hotmail.com
>
> _________________________________________________________________________
> Get Your Private, Free E-mail from MSN Hotmail at http://www.hotmail.com.

Received on Tue Jan 23 2001 - 14:53:01 NZDT
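For reference, the migration and monitoring steps discussed in this thread can be sketched as a shell session. The domain name (usr_domain), device names (/dev/rz8c, /dev/rz12c), and mount point (/usr) are hypothetical placeholders; the commands are Tru64 UNIX AdvFS utilities and will not run on other systems, so treat this as an illustration of the procedure rather than a tested script.

```shell
# Hedged sketch of an AdvFS volume migration on Tru64 UNIX V4.x.
# Domain, devices, and mount point below are made-up examples.

# 1. Add the new volume to the existing file domain.
addvol /dev/rz12c usr_domain

# 2. Remove the old volume; rmvol migrates its extents onto the
#    remaining volumes before detaching it. This is the long,
#    CPU-heavy step discussed above, so run it in the background.
rmvol /dev/rz8c usr_domain &

# 3. Periodically watch the per-volume block counts drain from
#    the volume being removed.
while true; do
    showfdmn usr_domain
    sleep 60
done

# 4. Check the extent map of the frag file (tag 1), which the
#    thread suggests can slow the final phase of rmvol when it
#    holds many small-file fragments.
showfile -x /usr/.tags/1
```

As noted in the thread, take a verified backup before starting, and avoid running the migration while the domain is being served over NFS from an ASE cluster service.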
This archive was generated by hypermail 2.4.0 : Wed Nov 08 2023 - 11:53:41 NZDT