SUMMARY: Do large file systems cause problems with dump?

From: Jim Wright <jwright_at_phy.ucsf.edu>
Date: Thu, 2 Feb 1995 11:36:17 -0800 (PST)

I re-posted a question I found in the archives that had never been
summarized. Here's what I found out.

My problem was that when dumping a 4 GB filesystem I would see something
like:

> | dump: Dumping from host foobar.cs.colorado.edu
> | dump: Date of this level 0 dump: Sat Apr 02 02:23:05 1994 MST
> | dump: Date of last level 0 dump: the start of the epoch
> | dump: Dumping /dev/rrz8e (/tmp_mnt/home/foobar) to standard output
> | dump: Mapping (Pass I) [regular files]
> | dump: Corrupted directory, i-node: 2
> | dump: SIGTERM received -- Try rewriting
> | dump: Unexpected signal -- cannot recover
> | dump: The ENTIRE dump is aborted

The disk fsck'd clean, and I could dump dozens of other filesystems
with no such problem.

There was no real consensus on the answer. My experience says it truly
is a function of filesystem size: dumping a nearly full 2 GB filesystem
worked fine for me, while dumping a nearly empty 4 GB filesystem failed
every time.

I solved the problem by taking dump and restore from the V3.0 CD-ROM and
putting them on my V2.0 system. Works fine! Quick and easy fix, since
I won't be ready to install V3.0 for some time. Thanks to everyone who
answered.
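
In case anyone wants to do the same, the procedure is roughly as
follows (the device name, CD mount point, and binary paths are only
examples from memory; check where dump and restore actually live on
your own media, and keep the V2.0 originals around):

        # mount the V3.0 distribution CD read-only (device name will vary)
        mount -r /dev/rz4c /mnt

        # save the stock V2.0 binaries before overwriting them
        cp /sbin/dump /sbin/dump.v2.0
        cp /sbin/restore /sbin/restore.v2.0

        # drop in the V3.0 versions
        cp /mnt/sbin/dump /sbin/dump
        cp /mnt/sbin/restore /sbin/restore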

===== From: alan_at_nabeth.cxo.dec.com =====

        That's an OSF V2.0 bug that is independent of file system
        size and even how full the file system is. It is fixed
        in V3.0 and a patch should be available from the CSC.

===== From: anthony baxter <anthony.baxter_at_aaii.oz.au> =====

How about converting them to AdvFS and trying vdump? AdvFS seems to work
nicely now...
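
For reference, the vdump syntax is close to dump's. A level 0 backup of
an AdvFS fileset to tape would look something like the following (the
tape device and mount point are just examples; check vdump(8) on your
release for the exact options):

        vdump -0 -u -f /dev/nrmt0h /home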

===== From: Mike Iglesias <iglesias_at_draco.acs.uci.edu> =====

We had a problem like that, and DEC said to upgrade to OSF/1 v2.1 or
v3.0. I don't remember what the problem was.

===== From: "Jeffrey S. Jewett" <spider_at_umd5.umd.edu> =====

I don't think that it's the filesystem > 2GB that's the problem, but
rather that dumping it produces a *dumpfile* > 2GB, and you can't have
*files* > 2GB. We have encountered this exact problem backing up
Solaris systems with large filesystems. We survived for a while by
compressing the dump stream as it was generated (dump | compress), but
eventually reality ganged up on us.
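
That stopgap looks roughly like this (BSD-style dump syntax; on Solaris
the command is ufsdump with the same key letters; the filesystem and
output path are just examples): dump writes a level 0 image to standard
output, and compress shrinks it on the way to disk:

        dump 0f - /home | compress > /backup/home.0.Z

Of course that only buys time; once the compressed image itself passes
2 GB you are back where you started.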

The solution is to move to something like NetWorker (DECnsr) that backs
up on a file-by-file basis.

===== From: CARL HILLSMAN <CARL_at_MOHAWK.WIC.EPA.GOV> =====

There is a dump patch from DEC that might take care of this. I had a dump
problem, though not the same as yours, but I think I recall one of the issues
addressed was similar to yours. Call your DEC rep, or get back to me and I can
look in my tape archive and see if I can find what you need.

===== From: Selden E Ball Jr <SEB_at_LNS62.LNS.CORNELL.EDU> =====

Dump does fail on some filesystems larger than 2GB for "early" versions
of OSF/1. I don't recall if a fixed version was shipped with OSF/1 V2.0.
If not, certainly a patched version of dump was available.
It is fixed in OSF/1 V3.0.

=====

Jim Wright                     Keck Center for Integrative Neuroscience
jwright_at_keck.ucsf.edu       Department of Physiology, Box 0444
voice 415-502-4874             513 Parnassus Ave, Room HSE-811
fax 415-502-4848               UCSF, San Francisco, CA 94143-0444
