commons-user mailing list archives

From Stefan Bodewig <>
Subject Re: [compress] LZ4 compress time to slow
Date Thu, 05 Oct 2017 15:37:23 GMT
On 2017-10-05, Simo Chiegang, Boris Arthur wrote:

> I simply tried to compress a byte array using LZ4 compression:

> int numberReaded = tifFile.readEncodedStrip( 49, pointer, -1 );
> byte[] byteResult = pointer.getByteArray( 0, numberReaded );   // The array has a length of 4194048 bytes, so about 4 MB
> ByteArrayOutputStream outStr = new ByteArrayOutputStream();
> BlockLZ4CompressorOutputStream outputStream = new BlockLZ4CompressorOutputStream( outStr );
> outputStream.write( byteResult );

> So, the execution time of the write method is more than 1
> minute. That's not acceptable; maybe it is a bug, or maybe I did
> something wrong?

By default the LZ4 implementation tries very hard to create the optimal
compression result. You can tweak this by using the two-arg constructor
and adjusting the parameters. Something like

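A sketch assuming the lz77support Parameters builder API shipped with Commons Compress 1.14+ (the surrounding class and method names are illustrative):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import org.apache.commons.compress.compressors.lz4.BlockLZ4CompressorOutputStream;
import org.apache.commons.compress.compressors.lz77support.Parameters;

public class Lz4SpeedExample {
    // Compress a byte array, trading compression ratio for speed
    // via the parameter builder.
    public static byte[] compressFast(byte[] input) throws IOException {
        Parameters params = BlockLZ4CompressorOutputStream.createParameterBuilder()
                .tunedForSpeed()
                .build();
        ByteArrayOutputStream outStr = new ByteArrayOutputStream();
        try (BlockLZ4CompressorOutputStream out =
                 new BlockLZ4CompressorOutputStream(outStr, params)) {
            out.write(input);
        }
        return outStr.toByteArray();
    }
}
```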

> P.S.: I tried another library and I got a time under 10 ms!

Unfortunately I doubt you'll get there with tunedForSpeed for biggish
arrays. You could try tweaking parameters further. You will probably
gain most by reducing maxOffset and maxBackReferenceLength.
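As a sketch of that tweaking, assuming the same Parameters builder API (the concrete values below are illustrative, not recommendations — smaller windows and shorter matches mean faster searches but usually worse compression):

```java
import org.apache.commons.compress.compressors.lz4.BlockLZ4CompressorOutputStream;
import org.apache.commons.compress.compressors.lz77support.Parameters;

public class Lz4TweakExample {
    // Build parameters that favor speed even further by shrinking
    // the search window and the maximum match length.
    public static Parameters fastParameters() {
        return BlockLZ4CompressorOutputStream.createParameterBuilder()
                .tunedForSpeed()
                .withMaxOffset(1024)               // search a much smaller window
                .withMaxBackReferenceLength(64)    // stop extending matches early
                .build();
    }
}
```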

While developing the LZ4 code I created a few benchmarks;

as you can see, LZ4 is quite a bit slower than Snappy, and the main
difference (they use the same compression code) is that Snappy
restricts the back-reference length to 64 bytes while it is almost
unlimited for LZ4. So when Snappy finds a match of 64 bytes it is done,
while LZ4 keeps searching for longer matches.
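That effect can be seen by capping LZ4's back-reference length at Snappy's 64 bytes via the same builder API and comparing output sizes on repetitive data (class name and test data are illustrative):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import org.apache.commons.compress.compressors.lz4.BlockLZ4CompressorOutputStream;
import org.apache.commons.compress.compressors.lz77support.Parameters;

public class Lz4MatchLengthDemo {
    // Compress input with the given parameters and return the compressed bytes.
    public static byte[] compress(byte[] input, Parameters params) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (BlockLZ4CompressorOutputStream out =
                 new BlockLZ4CompressorOutputStream(bos, params)) {
            out.write(input);
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] repetitive = new byte[64 * 1024];   // all zeros: very long matches possible

        Parameters unlimited = BlockLZ4CompressorOutputStream.createParameterBuilder().build();
        Parameters snappyLike = BlockLZ4CompressorOutputStream.createParameterBuilder()
                .withMaxBackReferenceLength(64)    // Snappy-style 64-byte cap
                .build();

        // Capping the match length forces more, shorter matches, so the
        // capped output is typically at least as large as the unlimited one.
        System.out.println(compress(repetitive, unlimited).length);
        System.out.println(compress(repetitive, snappyLike).length);
    }
}
```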

