commons-dev mailing list archives

From Stefan Bodewig <bode...@apache.org>
Subject Re: [compress] ZIP64: API imposed limits vs limits of the format
Date Fri, 05 Aug 2011 12:49:47 GMT
On 2011-08-04, Torsten Curdt wrote:

>> ZipFile relies on RandomAccessFile, so no archive can be bigger than
>> the maximum size supported by RandomAccessFile.  In particular, the
>> seek method expects a long argument, so the hard limit would be an
>> archive size of 2^63-1 bytes.  In practice I expect RandomAccessFile
>> to not support files that big on many platforms.

> Yeah ... let's cross that bridge when people complain ;)

With that I can certainly live.
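For anyone following along, the API-imposed ceiling is visible directly in the
RandomAccessFile signatures: seek(long) and getFilePointer() both traffic in
signed longs, so 2^63-1 is the largest offset the class can ever address. A
minimal sketch (the class name SeekBound and the 1 TiB offset are just
illustrative; seeking beyond EOF is explicitly permitted and does not change
the file length):

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

public class SeekBound {
    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("zip64", ".tmp");
        f.deleteOnExit();
        try (RandomAccessFile raf = new RandomAccessFile(f, "r")) {
            // seek(long) accepts any non-negative long, so Long.MAX_VALUE
            // (2^63 - 1) is the hard ceiling imposed by the API itself.
            raf.seek(1L << 40); // 1 TiB, far beyond the empty file's end
            System.out.println(raf.getFilePointer()); // 1099511627776
        }
    }
}
```

Whether the underlying platform honours an offset that large is a separate
question, which is the "in practice" caveat above.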

>> For the streaming mode offsets are currently stored as longs but that
>> could be changed to BigIntegers easily so we could reach 2^64-1 at the
>> expense of memory consumption and maybe even some performance issues
>> (the offsets are not really used in calculations so I don't expect any
>> major impact).

> No insights on the implementation but that might be worth changing so
> it's in line with the ZipFile impl

ZipFile is already limited to longs via RandomAccessFile.
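To make the gap concrete: the ZIP64 format stores sizes and offsets as
unsigned 64-bit values (cap 2^64-1), while a Java long is signed (cap 2^63-1).
A BigInteger comparison of the two limits, assuming nothing beyond the
standard library:

```java
import java.math.BigInteger;

public class Zip64Limits {
    public static void main(String[] args) {
        // API-imposed cap: RandomAccessFile.seek(long) -> 2^63 - 1
        BigInteger apiLimit = BigInteger.valueOf(Long.MAX_VALUE);
        // Format-imposed cap: ZIP64 fields are unsigned 64-bit -> 2^64 - 1
        BigInteger formatLimit =
            BigInteger.ONE.shiftLeft(64).subtract(BigInteger.ONE);
        System.out.println(apiLimit);    // 9223372036854775807
        System.out.println(formatLimit); // 18446744073709551615
    }
}
```

Switching the streaming-mode offsets to BigInteger would close exactly this
factor-of-two gap, at the memory and boxing cost mentioned above.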

>> I'm confident that even I would manage to write an efficient singly
>> linked list that is only ever appended to and that is iterated over
>> exactly once from head to tail.

> +1 for that then :)

Lasse's post showing that I'd need 100+ GB of RAM to take advantage of
my bigger LinkedList made me drop that plan 8-)

If anybody is really dealing with archives that big, they likely don't
use Commons Compress; and if they do, support for archives split into
multiple files might be more important.

Stefan

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@commons.apache.org
For additional commands, e-mail: dev-help@commons.apache.org

