Do large file systems cause problems with dump?

From: Jim Wright <jwright_at_phy.ucsf.edu>
Date: Mon, 23 Jan 1995 15:38:30 -0800 (PST)

I looked in the archives and found an exact description of the problem
I'm currently facing. Unfortunately, I could not find any answers. The
only difference is that I have a Seagate Barracuda with a 4 GB filesystem.

> Do large file systems cause problems with dump?
>
> Dirk Grunwald (grunwald_at_foobar.cs.colorado.edu)
> Sat, 2 Apr 1994 16:29:12 -0700
>
> I recently got fed up with running out of space and bought one of
> those nice $2400 4GB DSP5400's from Anda-taco (I got a free SPARC
> version of `Liken' with it - great deal).
>
> Anyway, I created a very large file system:
>
> [foobar-25] df ~
> Filesystem 512-blocks Used Avail Capacity Mounted on
> /dev/rz8e 5176630 1036696 3622270 22% /tmp_mnt/home/foobar
>
> about 2.8GB. Now, when we try to do dumps (OSF/1 V2.0), I get the
> following message:
>
> | dump: Dumping from host foobar.cs.colorado.edu
> | dump: Date of this level 0 dump: Sat Apr 02 02:23:05 1994 MST
> | dump: Date of last level 0 dump: the start of the epoch
> | dump: Dumping /dev/rrz8e (/tmp_mnt/home/foobar) to standard output
> | dump: Mapping (Pass I) [regular files]
> | dump: Corrupted directory, i-node: 2
> | dump: SIGTERM received -- Try rewriting
> | dump: Unexpected signal -- cannot recover
> | dump: The ENTIRE dump is aborted
>
> I fsck'd the file system and everything is clean.
>
> So, here's my question: does `dump' barf on very large file systems?
> I have this sneaking suspicion that it doesn't like filesystems
> greater than 2 GB.

Help!

Jim Wright Keck Center for Integrative Neuroscience
jwright_at_keck.ucsf.edu Department of Physiology, Box 0444
voice 415-502-4874 513 Parnassus Ave, Room HSE-811
fax 415-502-4848 UCSF, San Francisco, CA 94143-0444
Received on Mon Jan 23 1995 - 18:38:01 NZDT
