commons-dev mailing list archives

From Lasse Collin <lasse.col...@tukaani.org>
Subject Re: [compress] ZIP64: API imposed limits vs limits of the format
Date Thu, 04 Aug 2011 16:25:20 GMT
On 2011-08-04 Stefan Bodewig wrote:
> There are a few places where our implementation doesn't allow for the
> full range the ZIP format would support.  Some are easy to fix, some
> hard and I'm asking for feedback whether you consider it worth the
> effort to fix them at all.

I guess that these are enough for the foreseeable future:

    Max archive size:             Long.MAX_VALUE
    Max size of individual entry: Long.MAX_VALUE
    Max number of file entries:   Integer.MAX_VALUE
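In code terms those ceilings simply fall out of Java's primitive types. A hypothetical bookkeeping sketch (this is not the commons-compress API, just an illustration of where the limits come from):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical archive bookkeeping: sizes and offsets are longs, the
// entry list is int-indexed, so the API-side limits are Long.MAX_VALUE
// bytes and Integer.MAX_VALUE entries.
public class ArchiveWriter {
    private final List<String> entries = new ArrayList<>(); // int-indexed: at most Integer.MAX_VALUE entries
    private long bytesWritten;                               // long offset: archive size capped at Long.MAX_VALUE

    void addEntry(String name, long size) {
        if (size < 0) { // a long cannot represent an entry larger than Long.MAX_VALUE
            throw new IllegalArgumentException("entry size out of range");
        }
        if (entries.size() == Integer.MAX_VALUE) {
            throw new IllegalStateException("too many entries for an int-indexed list");
        }
        entries.add(name);
        bytesWritten += size;
    }

    int entryCount() { return entries.size(); }
    long size() { return bytesWritten; }

    public static void main(String[] args) {
        ArchiveWriter w = new ArchiveWriter();
        w.addEntry("a.txt", 123);
        System.out.println(w.entryCount() + " entry, " + w.size() + " bytes");
    }
}
```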

Java APIs don't support bigger files, and I doubt files that big will
become common even if file system sizes allowed them. Even writing ten
terabytes per second, it would still take well over a week to create an
archive of 2^63-1 bytes.
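A back-of-the-envelope check of that figure (the ten-terabytes-per-second rate is of course hypothetical):

```java
// How long does writing a 2^63-1 byte archive take at a sustained
// (hypothetical) 10 TB/s?
public class ArchiveTime {
    public static void main(String[] args) {
        double archiveBytes = Math.pow(2, 63) - 1; // Long.MAX_VALUE as a double
        double bytesPerSecond = 10e12;             // 10 TB/s, decimal terabytes
        double days = archiveBytes / bytesPerSecond / 86400;
        System.out.printf(java.util.Locale.ROOT, "%.1f days%n", days);
        // prints "10.7 days"
    }
}
```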

I don't know how much memory one file entry needs, but let's assume it
takes only 50 bytes, including the overhead of the linked list etc.
Keeping a list of 2^31-1 files will then need about 100 GiB of RAM.
While that might be OK in some situations, I hope such archives won't
become common. ;-) Even if the number of files is limited to
Integer.MAX_VALUE, it is worth thinking about the memory usage of the
data structures used for the file entries.
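Spelling that estimate out (the 50 bytes per entry is an assumption from the paragraph above, not a measured number):

```java
// Memory needed to hold Integer.MAX_VALUE in-memory file entries at an
// assumed 50 bytes apiece.
public class EntryMemory {
    public static void main(String[] args) {
        long total = (long) Integer.MAX_VALUE * 50; // bytes; cast avoids int overflow
        System.out.printf(java.util.Locale.ROOT, "%.0f GiB%n", total / (double) (1L << 30));
        // prints "100 GiB"
    }
}
```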

-- 
Lasse Collin  |  IRC: Larhzu @ IRCnet & Freenode

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@commons.apache.org
For additional commands, e-mail: dev-help@commons.apache.org

