I sent a message stating that we had a
problem performing backups on an Alpha
OSF/1 3.2 system with a RAID 5 SWXCR controller
and AdvFS, using both NetWorker and vdump.
It was taking 36 hours to back up 18 GB.
The 18 GB consists of a very large number
(1.5 million+) of 2 KB files, which works out
to roughly 145 KB/s, or about 12 files per second.
Several people responded that they were taking
3 to 6 hours to back up 15 to 20 GB. I don't think
they had as many little files as we did.
Through more testing at our site, we populated
a RAID set with a lot of large 1 to 2 MB files, and
backups ran at the speed they should.
When we populated the RAID set with lots and lots
of 2 KB files, the transfer rate went in the toilet.
DEC finally came back and said this would happen.
(No real warm-and-fuzzy feeling from them.) They
recommended that we rewrite our application to roll
some of these small files into larger files.
(Nice answer.) But it looks like we will have to
do that anyway; we cannot live with backups
taking 36 hours.
Below is a response from a DEC engineer:
"
Each file probably requires being opened, a record that it
was backed up logged, read (possibly written, but the writes
may be collected), and then closed, which will want to change
the access time and therefore cause a write back to the file
system. There's probably as much work going into logging which
files are getting backed up as there is in backing up the files
themselves. It isn't that surprising that the backup is slow.
To get a benchmark for the best backup times you can get from
such a file system, try:
UFS: dump to /dev/null
UFS: tar to /dev/null
AdvFS: vdump to /dev/null
AdvFS: tar to /dev/null
Don't use any options that allow verbose logging of the files
that get written. Writing those to a terminal will slow the
backup down even more. Dump on UFS has the slight advantage
that it doesn't have to open the files or change the access
time.
The results may show that while reasonable (but probably not
good) backup speeds can be achieved, they require different
tools and file systems than the ones they really want to use.
They might want to consider redesigning their software to put
a bunch of those 2 KB files into a few larger files and then
seek to the appropriate 2 KB offset. If their I/O load is
mostly reads of such files, this redesign might have some
performance advantages.
"
If anybody else has anything to add to this, please respond to
palmer_at_mdsol1.mdc.com
Received on Fri May 17 1996 - 23:08:02 NZST