hadoop-common-user mailing list archives

From Niels Basjes <Ni...@basjes.nl>
Subject Re: Should splittable Gzip be a "core" hadoop feature?
Date Wed, 29 Feb 2012 15:55:22 GMT

On Wed, Feb 29, 2012 at 13:10, Michel Segel <michael_segel@hotmail.com>wrote:

> Let's play devil's advocate for a second?

I always like that :)

> Why?

Because then data files from other systems (like the Apache HTTP webserver)
can be processed more efficiently, without a preprocessing step.

> Snappy exists.

Compared to gzip: Snappy is faster, compresses a bit less and is
unfortunately not splittable.
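(For readers unfamiliar with why plain gzip is not splittable: a map task handed the middle of a gzip file cannot start decoding at its own byte offset, because the deflate stream is only decodable from the gzip header onward. A minimal, self-contained sketch of that limitation in plain Java, with no Hadoop dependency; the class name and sample log line are illustrative only:)

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.Arrays;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;
import java.util.zip.ZipException;

public class GzipSplitDemo {
    // Gzip a byte array in memory.
    static byte[] gzip(byte[] data) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(data);
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        // Something resembling a repetitive webserver logfile (illustrative data).
        byte[] line = "127.0.0.1 - - [29/Feb/2012] \"GET / HTTP/1.1\" 200 1234\n"
                .getBytes("UTF-8");
        ByteArrayOutputStream raw = new ByteArrayOutputStream();
        for (int i = 0; i < 10000; i++) raw.write(line);
        byte[] compressed = gzip(raw.toByteArray());

        // Decompressing from byte 0 works fine.
        new GZIPInputStream(new ByteArrayInputStream(compressed)).read();

        // A reader starting halfway through the file finds no gzip header,
        // so an input split cannot decode its own slice independently.
        byte[] secondHalf =
                Arrays.copyOfRange(compressed, compressed.length / 2, compressed.length);
        try {
            new GZIPInputStream(new ByteArrayInputStream(secondHalf)).read();
            System.out.println("unexpectedly decodable");
        } catch (ZipException e) {
            System.out.println("split at midpoint fails: " + e.getMessage());
        }
    }
}
```

This is exactly the constraint a splittable-gzip codec has to work around: every split other than the first must decompress from the start of the file up to its own offset before it can emit records.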

> The only advantage is that you don't have to convert from gzip to snappy
> and can process gzip files natively.

Yes, that and the fact that the files are smaller.
Note that I've described some of these considerations in the javadoc.

> Next question is how large are the gzip files in the first place?

I work for the biggest webshop in the Netherlands and I'm facing a set of
logfiles that are very often larger than 1 GB each ... and are gzipped.
The first thing we do with them is parse and dissect each line in the very
first mapper. Then we store the result in (Snappy-compressed) Avro files.
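(A rough sketch of the kind of per-line dissection such a first mapper performs, in plain Java without the Hadoop or Avro APIs; the class name, regex, and field layout are my illustration of parsing the Apache Common Log Format, not Niels's actual code:)

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class AccessLogParser {
    // Common Log Format: host ident authuser [date] "request" status bytes
    private static final Pattern CLF = Pattern.compile(
            "^(\\S+) (\\S+) (\\S+) \\[([^\\]]+)\\] \"([^\"]*)\" (\\d{3}) (\\S+)");

    // Dissect one log line into its seven fields; returns null if malformed.
    public static String[] dissect(String line) {
        Matcher m = CLF.matcher(line);
        if (!m.find()) return null;
        String[] fields = new String[7];
        for (int i = 0; i < 7; i++) fields[i] = m.group(i + 1);
        return fields;
    }

    public static void main(String[] args) {
        String line = "127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] "
                + "\"GET /apache_pb.gif HTTP/1.0\" 200 2326";
        String[] f = dissect(line);
        // Prints the request and its status code.
        System.out.println(f[4] + " -> " + f[5]);
    }
}
```

In the real pipeline described above, the dissected fields would then be written as records to Snappy-compressed Avro files, so that every downstream job reads structured, splittable input instead of re-parsing gzipped text.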

I don't disagree, I just want to have a solid argument in favor of it...


Best regards / Met vriendelijke groeten,

Niels Basjes
