Hi Managers,
sorry, sending the original question was my last action before a week of holidays,
so this SUMMARY comes a little late ;-)
The solution is:
- The limit is ONE clone per fileset.
After changing my naming conventions I forgot to delete a lot of the old clones,
and so I got the errors (like: max number of fset reached).
It was my error! Sorry again.
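For anyone hitting the same errors: it pays to check for and remove leftover
clones before cloning again. A minimal sketch, assuming a "_clone" naming
convention like mine and that "showfsets -b" prints just the fileset names
of the domain (the domain name below is a placeholder):

  #!/bin/sh
  # Remove leftover "_clone" filesets -- AdvFS allows only ONE
  # clone per fileset, so old clones block the next clonefset.
  DOMAINE=sap_domain                 # placeholder domain name

  for FSET in `showfsets -b $DOMAINE`
  do
      case $FSET in
      *_clone)
          echo "removing leftover clone: $FSET"
          rmfset -f $DOMAINE $FSET
          ;;
      esac
  done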
Thanks for their answers go to:
Joe Ledesma
Tom Brand
alan_at_nabeth.cxo.dec.com
Viktor Holmberg
Cy Dewhurst
Morgan Rogers
The original text was:
>
> I'm trying to write a script to speed up my NetWorker backup
> with a minimum of offline time for the application (SAP) and
> the database (Oracle).
> The idea is to perform the following steps:
>
> 1. shutdown appl./db
> 2. clone all filesets (more than 10)
> for each clone do:
> 1. mount the clone
> 2. save, umount and remove the clone in the background:
> (save -s $STORAGE_SERVER -b Default -e "2 Weeks" $MNTPT;
> umount $MNTPT;
> rmfset -f $DOMAINE ${FSET}_clone) &
> The reason for this is that we have a library with 4 drives (TL894)
> and I want to give NetWorker the chance to use its capability
> to optimize the backup process.
> 3. starting up appl./db (while the saving is running!)
>
> The problem is that the number of simultaneous clones
> seems to be limited to 6.
>
> Has anyone had the same experience, and how can I change this limit?
>
> Thx in advance
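For completeness, here is a sketch of how the final script looks now. The
server, domain and fileset names are placeholders for my environment, and
the appl./db shutdown/startup commands are site-specific:

  #!/bin/sh
  STORAGE_SERVER=backupsrv           # placeholder server name
  DOMAINE=sap_domain                 # placeholder AdvFS domain
  FSETS="sapdata1 sapdata2 saplog"   # in reality more than 10 filesets

  # 1. shutdown appl./db (site-specific commands)

  # 2. clone all filesets (remember: only ONE clone per fileset!)
  for FSET in $FSETS
  do
      clonefset $DOMAINE $FSET ${FSET}_clone
  done

  # for each clone: mount it, then save, umount and remove it
  # in the background, so NetWorker can keep all 4 drives busy
  for FSET in $FSETS
  do
      MNTPT=/backup/${FSET}_clone
      mkdir -p $MNTPT
      mount -t advfs ${DOMAINE}#${FSET}_clone $MNTPT
      (save -s $STORAGE_SERVER -b Default -e "2 Weeks" $MNTPT;
       umount $MNTPT;
       rmfset -f $DOMAINE ${FSET}_clone) &
  done

  # 3. start up appl./db again (while the saving is running!)

  wait    # let all background saves finish before the script exits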
fp

What does a process need to become a daemon ?  ..a fork()

Frank-Peter Reich
Tru64-Second-Level-Support & Perl Monger
Nowis Oldenburg
e-mail: freich_at_nowis.de