[SUMMARY] max file size for tar

From: Susan Rodriguez <SUSROD_at_HBSI.COM>
Date: Thu, 05 Mar 1998 10:32:21 -0800

Thanks for the quick responses from:

warren_at_atmos.washington.edu
sanghvi_at_proto.wilm.ge.com
alan_at_nabeth.cxo.dec.com
webster_at_ssdpdc.lgb.cal.boeing.com

Basically, I am being told that yes, there is a limit on the size of files
that can be archived with tar. I'll include Alan's explanation of why that is:

From: alan_at_nabeth.cxo.dec.com

The tar header uses the ASCII representation of file size in
12 octal digits. This limits the file size to 8 GB. Some
20 years ago when the format was created, that was enough.
Unfortunately, it isn't today, but exchange formats don't
lend themselves to this sort of change. GNUtar may offer
an option for writing a header format that isn't as portable,
but offers a wider range of features.
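
To see where the 8 GB figure comes from (assuming the classic header layout,
where the 12-character size field leaves 11 usable octal digits plus a
terminator), the ceiling is 8^11 bytes; a quick check with bc(1):

% echo '8^11' | bc
8589934592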

Equally unfortunate is that most archive formats are about
the same age or older and may not allow for large file sizes.
You might check to see if cpio has a binary header format
which may allow larger file sizes. Failing that, try ANSI
label tapes. With a large enough record size they may allow
very large files.
-------------------------------------------------------

I guess I'll be looking into other tools. Perhaps dd or cpio.
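
For the cpio route, a minimal sketch (the no-rewind device /dev/nrmt0h and
the file name are examples only; whether a given cpio header format really
admits files past 8 GB has to be checked per system, since the portable
ASCII header also stores sizes in octal digits):

% echo bigfile | cpio -o > /dev/nrmt0h    # -o (copy-out) reads file names from stdin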



Suggestions from others:

If you are backing up a 16GB file, do you really want to use a tool that
invalidates the entire archive when it hits a bad tape block (and does no
checking for said bad blocks)? You might want to try NSR; you should
have a standalone license just for having DU. NSR should break the
file down into a number of 2GB blocks while saving it (this is done
for compatibility with systems that can't handle the idea of something
larger than 2GB). The file should restore just fine, but I can't say
that I have tried a 16GB file.

Otherwise, if you have the space and time, you can always use GNU's split
utility to break the file up into manageable parts, then just cat them back
together on the other end (a sketch follows Tom's signature)....

Tom
--
+-----------------------------------+---------------------------------+
| Tom Webster                       |  "Funny, I've never seen it     |
| SysAdmin MDA-SSD ISS-IS-HB-S&O    |   do THAT before...."           |
| webster_at_ssdpdc.lgb.cal.boeing.com |   - Any user support person     |
+-----------------------------------+---------------------------------+
|      Unless clearly stated otherwise, all opinions are my own.      |
+---------------------------------------------------------------------+
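
The split/cat route Tom mentions, as a quick sketch (assuming GNU split's
-b byte-count option; the 2000m part size and the bigfile. prefix are
arbitrary examples):

% split -b 2000m bigfile bigfile.   # writes bigfile.aa, bigfile.ab, ...
% cat bigfile.?? > bigfile          # glob expands in lexical order

----------------------------------------------------------------------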
Also from Alan:
I looked at the ltf(5) (format of ANSI labeled tapes) from an
ULTRIX system and it looks like the maximum file size for an
ANSI tape is around 19 GB.  It seems to have a maximum record
size of 20480 bytes with 999,999 records (20480 x 999,999 is
about 20.5 x 10^9 bytes, i.e. roughly 19 GB in powers of two).
The other choice would be to simply use dd(1) to write the
files to tape at some appropriate block size.  Use the no-
rewind device in between files and file size shouldn't be
an issue.  For cataloging, you might try writing a small
tar archive for each large file and put those files before
each of the large ones:
% echo "A description of 'bigfile'" > bigfile.txt
% ls -las bigfile >> bigfile.txt
% tar cbf 128 bigfile.tar bigfile.txt
% dd if=bigfile.tar of=/dev/nrmt0h bs=64k
% dd if=bigfile of=/dev/nrmt0h bs=64k
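
To read the pair back, a minimal sketch along the same lines (assuming the
mt(1) utility and the same no-rewind device; the positioning steps depend on
what else is already on the tape):

% mt -f /dev/nrmt0h rewind
% dd if=/dev/nrmt0h of=bigfile.tar bs=64k   # dd stops at the first tape mark
% dd if=/dev/nrmt0h of=bigfile bs=64k       # the next file is the data itself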
----------------------------------------------------------------------
From:  belonis_at_dirac.phys.washington.edu

I don't know about tar file sizes, but you can always use dd to write files
to tape.  It may or may not have file size limits of its own, and will not
include directory information with the file, so write the ls -l output as a
separate file.
----------------------------------------------------------------------

ORIGINAL POST:

Managers,

I am installing DEC patch c970426-1401.tar.Z to fix a problem with the
tar utility.  The readme file indicates that the patch will cause tar to
handle files greater than 4 GB correctly, and also seems to imply that
the max size of a file in a tar archive is 8 GB.
Excerpt from README:

Modify tar to correctly handle a file whose size exceeds 4GB.  Previously,
this would quietly cause corruption of the fileheader, thus producing a
useless tarset (a tar extract would not be able to read the tarset).
Additionally, added code to detect attempts to exceed the maximum size of a
file (8GB as described by the tar fileheader), and to warn when it is
necessary to truncate a file that is too large.
Am I reading this correctly?  Is 8 GB really as big as you can get with
tar?  What about gnutar?  Does it suffer the same limitation?

I have to have a tool which can write individual files of up to 16 GB to
tape.  If tar is not the answer, I would appreciate any suggestions.
TIA,
susrod_at_hbsi.com
- consistency is the defense of a small mind