It looks as though I don't have too much to worry about. I was working on the
idea that I should get greater than 2 MB/s with compression (since the DLT is
rated at 3 MB/s).
In any case, going from a 6-7 hour backup (DAT) to less than 2 is great!!
Thanks to all who replied;
--------------------------------------------------------------------------
>From alan_at_nabeth.cxo.dec.com Mon Aug 4 15:58:38 1997
>We have begun using an external Quantum DLT 4000 on our AS2100 (DU 3.2g),
>and dump doesn't seem to be able to figure out what it needs, e.g.
Dump doesn't *need* anything. It writes till it gets to EOT and
then prompts for the next tape. The "finished in" and estimated
tape blocks are strictly for your benefit. The dump manual page
has length and density information for many of the supported
drives, which will give better estimates. Knowing that the native
capacity of the DLT 4000 with CompacTape IV is 20 GB, you could
derive a better estimate by extrapolating from the CompacTape
III numbers and feeding dump those values.
An additional problem with making the estimates is that the
actual amount of tape used depends on the compression ratio
the drive achieves with a given data set. Unless you
know how well your data compresses, you can't account for
this in the length or density, and dump doesn't have a clue.
>
>dump: Dumping from host redgum.bendigo.latrobe.edu.au
>dump: Date of this level 0 dump: Sun Aug 03 16:10:56 1997 EST
>dump: Date of last level 0 dump: the start of the epoch
>dump: Dumping /dev/rre0g (/usr) to /dev/rmt0h
>dump: Mapping (Pass I) [regular files]
>dump: Mapping (Pass II) [directories]
>dump: Estimate: 10683239 tape blocks on 248.41 volume(s)
Odd that it is using 1 KB tape blocks. It could be a bug, an
internal mystery conversion, or simply not the block size you
think it is. Such a small block size would seriously hurt
performance, so you are probably using 64 KB after all.
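If there is any doubt about the record size actually reaching the
tape, dd can settle it. A sketch, assuming /dev/nrmt0h is the
no-rewind tape device on this system (the block is a no-op on a
machine without that device, and it overwrites the loaded tape):

```shell
TAPE=${TAPE:-/dev/nrmt0h}       # no-rewind tape device: an assumption
if [ -c "$TAPE" ]; then
    mt -f "$TAPE" rewind
    dd if=/dev/zero of="$TAPE" bs=64k count=16   # write 16 x 64 KB records
    mt -f "$TAPE" rewind
    dd if="$TAPE" of=/dev/null bs=64k            # dd reports records read back
fi
```

The record counts dd prints on the read pass show what block size
the drive actually stored.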
>Despite the grunged numbers the dump seemed to work ok, but it took
>1hr 50mins to dump an 11GB partition, which is 30-40mins longer
>than it should take.
What are you basing the expected time on? The performance of
a tape drive, especially a fast one, is often limited by the
speed of the input device. In the case of a file system backup,
the backup program and file mix are often the limiting factor.
Your 11 GB in 110 minutes is quite good (1747 KB/sec). If
the file system is large files and they compress well you
might go faster, but that rate isn't shabby for the DLT 4000.
You can get a clue what the best case might be by dumping to
stdout and redirecting the output to /dev/null.
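A sketch of that /dev/null measurement (the -0 and -b 64 flags come
from the invocation quoted elsewhere in the thread; the block only
runs where a dump binary is installed):

```shell
# Best-case read-side throughput: dump to stdout, throw the bytes away.
# No tape is involved, so this bounds what the drive could ever be fed.
FS=${FS:-/usr}                  # filesystem from the quoted dump output
if command -v dump >/dev/null 2>&1; then
    time dump -0 -b 64 -f - "$FS" > /dev/null
fi
```

If this runs no faster than the real backup, the disks or the file
mix, not the tape drive, are the bottleneck.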
>Are there any customisations of the kernel, or dump parameters
>that would help me out? Is the fact that its a Quantum, rather than a
>tz88 a problem?
It is conceivable that the device recognition entry for the
TZ88 explicitly enables the write cache on the drive, and that
the lack of an entry for the DLT 4000 leaves it disabled. But if
that were true, you probably wouldn't have gotten close to the
performance you did get. You might do a little better by
putting the DLT 4000 on a different bus from the disks. Making
sure the adapter for the DLT 4000 supports Fast SCSI transfers
could also help a bit, but reading off the disks faster is what
will help the most.
--------------------------------------------------------------------------
>From mbertone_at_gtech.com Mon Aug 4 21:41:06 1997
Brian,
the length of time it takes depends A LOT on the number and size of
the files in the 11 gig file system. If you have 300,000 small files,
it will definitely take a lot longer than an 11 gig file system with
eleven 1 gig files. The reason is that the drive has to write file
information and gaps between files, a time-consuming,
start-stop operation, whereas if the number of files is small, the
drive can stream as it was meant to. The 2100 should have the
horsepower to keep the drive streaming when backing up large files.
Regards,
Mitch Bertone
mbertone_at_gtech.com
--------------------------------------------------------------------------
>From belonis_at_dirac.phys.washington.edu Mon Aug 4 13:13:48 1997
> Hello,
> forgive me if this is in the archives, but I can't seem to find it.
>
> We have begun using an external Quantum DLT 4000 on our AS2100 (DU 3.2g),
> and dump doesn't seem to be able to figure out what it needs, e.g.
>
> dump: Dumping from host redgum.bendigo.latrobe.edu.au
> dump: Date of this level 0 dump: Sun Aug 03 16:10:56 1997 EST
> dump: Date of last level 0 dump: the start of the epoch
> dump: Dumping /dev/rre0g (/usr) to /dev/rmt0h
> dump: Mapping (Pass I) [regular files]
> dump: Mapping (Pass II) [directories]
> dump: Estimate: 10683239 tape blocks on 248.41 volume(s)
This bad estimate is apparently no problem,
since all the data dumped to the tape.
But if you want some numbers to tell dump, try
-s 80000 -d 54000
which corresponds to about 30 gig.
That should stop it complaining.
Then adjust the 's' number depending on the compression ratio you
actually achieve (i.e. when you get 'out of space' errors, reduce the
's' argument; if you succeed in fitting more on tape, increase it a
gig or so).
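On a command line that suggestion might look like this (a sketch:
the device and filesystem names are placeholders, and the dash-style
flag spelling follows the -b 64 usage quoted below):

```shell
TAPE=${TAPE:-/dev/nrmt0h}   # no-rewind tape device: an assumption
FS=${FS:-/usr}              # filesystem from the quoted dump output
if [ -c "$TAPE" ]; then
    # -s tape length in feet, -d density in BPI: oversized on purpose
    # so dump stops predicting extra volumes; trim -s if you ever hit
    # real 'out of space' errors.
    dump -0u -b 64 -s 80000 -d 54000 -f "$TAPE" "$FS"
fi
```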
> dump: Dumping (Pass III) [directories]
> dump: Volume 1, tape # 0001, begins with blocks from i-node 2
> dump: 0.00% done -- finished in 127:30
> dump: Dumping (Pass IV) [regular files]
> dump: 3.88% done -- finished in 02:13
> dump: 8.82% done -- finished in 01:47
> dump: 14.10% done -- finished in 01:33
> dump: 19.42% done -- finished in 01:24
> dump: 24.58% done -- finished in 01:17
> dump: 30.35% done -- finished in 01:09
> dump: 35.32% done -- finished in 01:04
> dump: 40.61% done -- finished in 00:59
> dump: 46.09% done -- finished in 00:53
> dump: 51.45% done -- finished in 00:47
> dump: 56.92% done -- finished in 00:41
> dump: 61.94% done -- finished in 00:37
> dump: 67.42% done -- finished in 00:31
> dump: 72.32% done -- finished in 00:26
> dump: 77.60% done -- finished in 00:21
> dump: 82.82% done -- finished in 00:16
> dump: 87.64% done -- finished in 00:12
> dump: 92.92% done -- finished in 00:06
> dump: 98.53% done -- finished in 00:01
> dump: Actual: 10613772 tape blocks on 1 volume(s)
> dump: Feet remaining on tape: -569985
> dump: Volumes used: 1
> dump: Rewinding and unloading tape
> dump: Unmount last tape
>
> this is a result of a dump -b 64 command.
> Despite the grunged numbers the dump seemed to work ok, but it took
> 1hr 50mins to dump an 11GB partition, which is 30-40mins longer
> than it should take.
^^^^^^^^^^^^^^
How did you decide how long it should take ?
2 hours for 11GB sounds pretty damn fantastic to me !
We have never gotten faster than 1.2MB/sec (more usually 600KB/sec)
throughput from local disks on our SGI backup system to a DLT4000.
You must have some lickity-split disks in a RAID array ?
<yes s/w 230 raid array with rz28's -- boc>
The tape drive is only rated at 1.5MB/sec, i.e. 5.4GB/hour
or 10.8GB per 2 hours, uncompressed.
Which is pretty close to what you are getting.
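The arithmetic behind those figures, as a quick shell sanity check
(decimal MB and GB per hour, matching the thread's usage):

```shell
# 11 GB moved in 110 minutes, expressed in KB/sec:
echo "$((11 * 1024 * 1024 / (110 * 60))) KB/sec"   # prints 1747 KB/sec
# The drive's rated 1.5 MB/sec sustained, expressed per hour:
echo "$((1500 * 3600 / 1000)) MB/hour"             # 5400 MB/hour = 5.4 GB/hour
```

So the observed 1747 KB/sec is only modestly above the uncompressed
rating, consistent with a compression ratio barely above 1 or some
other bottleneck.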
Multiply by the compression ratio (probably about 1.5 - 2.0)
and something else is limiting you. Probably disk seek speed or CPU
availability.
What happens if you dump to the null device ?
I bet it isn't much faster.
> The external tape box is on a scsi bus with a tlz06 and cdrom(neither of
> which were in use at the time).
>
> Are there any customisations of the kernel, or dump parameters
> that would help me out? Is the fact that its a Quantum, rather than a
> tz88 a problem?
No. No.
You can try a bigger block size (I am unsure of the limit on DU).
But anything over 16KB or so only helps in a minor way.
Also, of course, make sure the system is doing nothing else major during dumps.
> uerf says that its a;
>
> tz4 at scsi0 bus 0 target 4 lun 0
> _(Quantum DLT4000 CD50)
>
> Thanks in advance
> boc
> --
>
> Brian O'Connor, Unix Systems Consultant
> La Trobe University,Bendigo,Australia; b.oconnor_at_latrobe.edu.au
> "If you never change your mind, why have one?"(Edward De Bono)
--
J.James(Jim)Belonis II, U of Washington Physics Computer Cost Center Manager
belonis_at_phys.washington.edu Internet University of Washington Physics Dept.
http://www.phys.washington.edu/~belonis r. B234 Physics Astronomy Building
1pm to midnite 7 days (206) 685-8695 Box 351560 Seattle, WA 98195-1560
--------------------------------------------------------------------------
Thanks again
boc
--
Brian O'Connor, Unix Systems Consultant
La Trobe University,Bendigo,Australia; b.oconnor_at_latrobe.edu.au
"If you never change your mind, why have one?"(Edward De Bono)
Received on Tue Aug 05 1997 - 00:25:51 NZST