[NCLUG] fixing corrupt cpio archive

Matthew Wilcox willy at debian.org
Tue Nov 20 08:25:32 MST 2001


On Mon, Nov 19, 2001 at 08:36:30PM -0700, S. Luke Jones wrote:
> > zcat stopped feeding bytes to cpio.  cpio wasn't expecting the end of
> > the archive to occur at the point that zcat gave up.  You shouldn't
> 
> To be sure, the example could lead one to that conclusion, but

Um, no, that _is_ what happens.

> in fact the same thing occurs if I do
> 	$ zcat foo.cpio.Z | cpio ...
> or
> 	$ gunzip foo.cpio.Z
> 	$ cat foo.cpio | cpio ...
> 
> In that case it's the filesystem that stops feeding bytes to the
> file handle bash associates with the pipe to cpio.

Yes, it is.  That doesn't make him wrong.  gunzip only writes as much as it
can decompress to the filesystem before it hits the corruption; cpio then
reads that truncated file.
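
You can reproduce the same failure with a perfectly healthy archive by
truncating it by hand (a sketch; foo.cpio and the byte count are made up
for illustration):

	$ head -c 10240 foo.cpio > truncated.cpio   # simulate gunzip's short write
	$ cpio -it < truncated.cpio                 # cpio runs out of input mid-archive and errors out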

> Am I just weird or is it reasonable to expect an archive format
> to include sufficient redundancy to support a certain level of
> bit-error recovery? Is there a popular archive format that does?

You gzipped an archive format.  Compression works by squeezing out
redundancy, so you specifically asked to have any redundancy thrown away at
that point.  A single bit error typically garbles everything after it in
the compressed stream.
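
gzip does at least store a CRC of the original data, so you can detect the
damage up front before cpio ever sees it (a sketch, assuming a gzipped copy
named foo.cpio.gz):

	$ gzip -t foo.cpio.gz    # decompresses in memory and checks the stored CRC
	$ echo $?                # non-zero exit status if the stream is corrupt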

> (These days I mainly use zip, because it's cross platform: Windows,
> all forms of UN*X, and the Java standard library. Also, I detest
> tar's command-line semantics. (I hear great things about the
> simple-to-decode byte layout however.))

If you want to save stuff long-term, make multiple copies, preferably on
different kinds of media.  I'm told CD-RW doesn't last long-term, and DAT
is still the best alternative.
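
Whichever media you use, it's worth storing a checksum alongside each copy
so you can tell later which copies are still intact (a sketch using md5sum;
the filename is illustrative):

	$ md5sum foo.cpio.gz > foo.cpio.gz.md5    # record a fingerprint next to the archive
	$ md5sum -c foo.cpio.gz.md5               # later, verify this copy still matches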

-- 
Revolutions do not require corporate support.


