hbase-user mailing list archives

From: Stack <st...@duboce.net>
Subject: Re: Verbose logging with compression
Date: Wed, 12 Jan 2011 19:12:10 GMT
We have HBASE-1900 marked against 0.92.
St.Ack

On Tue, Jan 11, 2011 at 7:57 PM, Matt Corgan <mcorgan@hotpads.com> wrote:
> Sounds like all upside to me... it was a little tricky to notice since it
> still compresses without them.
>
> Matt
>
>
> On Tue, Jan 11, 2011 at 10:14 PM, Stack <stack@duboce.net> wrote:
>
>> Oh.  Yeah.  Makes sense.  We used to bundle the native libs but we
>> seem to have dropped them.  We should add them back?
>> St.Ack
>>
>> On Tue, Jan 11, 2011 at 3:24 PM, Matt Corgan <mcorgan@hotpads.com> wrote:
>> > Turns out this is what happens if you don't have the native libraries
>> > set up correctly.  The data still gets compressed using the pure java
>> > codec, but it doesn't cache the codec and gives you a warning each
>> > time it creates it for each block.
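A minimal sketch of pointing the HBase JVM at the Hadoop native libs so the
native gzip codec loads; the path below is an assumption, adjust it to
wherever your Hadoop build keeps its native libraries:

  # conf/hbase-env.sh -- the native-lib path here is illustrative, not a real default
  export HBASE_OPTS="$HBASE_OPTS -Djava.library.path=/usr/lib/hadoop/lib/native/Linux-amd64-64"

With the native codec on java.library.path, CodecPool can pool and reuse
compressors instead of constructing (and logging) a brand-new one per block.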
>> >
>> >
>> > On Mon, Jan 10, 2011 at 2:41 PM, Stack <stack@duboce.net> wrote:
>> >
>> >> That's a little silly.  The message being INFO level is probably
>> >> small potatoes when doing a mapreduce job, but in our case, with lots
>> >> of file openings, it turns into a little log storm.
>> >>
>> >> I suppose you'll need to disable it.  Set the log level to WARN on
>> >> org.apache.hadoop.io.compress?
>> >>
>> >> This might help you make the change:
>> >> http://wiki.apache.org/hadoop/Hbase/FAQ#A5
>> >>
>> >> St.Ack
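Until the native libs are sorted out, muting that logger in HBase's
conf/log4j.properties is one stopgap; a minimal sketch in log4j 1.x syntax:

  # drop per-block CodecPool INFO messages; WARN and above still get through
  log4j.logger.org.apache.hadoop.io.compress=WARN

This silences everything at INFO under org.apache.hadoop.io.compress, which
includes the CodecPool lines quoted below.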
>> >>
>> >> On Mon, Jan 10, 2011 at 9:46 AM, Matt Corgan <mcorgan@hotpads.com> wrote:
>> >> > I'm trying to use GZIP compression but running into a logging
>> >> > problem.  It appears that every time a block is compressed it logs
>> >> > the following:
>> >> >
>> >> > 2011-01-10 12:40:48,407 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new compressor
>> >> > 2011-01-10 12:40:48,414 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new compressor
>> >> > 2011-01-10 12:40:48,420 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new compressor
>> >> > 2011-01-10 12:40:48,426 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new compressor
>> >> > 2011-01-10 12:40:48,431 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new compressor
>> >> > 2011-01-10 12:40:48,447 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new compressor
>> >> > 2011-01-10 12:40:48,453 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new compressor
>> >> >
>> >> > Same for decompression.  It's logging that 150 times per second
>> >> > during a major compaction, which pretty much renders the logs
>> >> > useless.  I assume other people are not having this problem, so did
>> >> > we accidentally enable that logging somehow?
>> >> >
>> >> > Thanks,
>> >> > Matt
>> >> >
>> >>
>> >
>>
>
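For context on the setup described above: GZIP is enabled per column family,
typically from the HBase shell; a minimal sketch (the table and family names
here are made up):

  create 'mytable', {NAME => 'mycf', COMPRESSION => 'GZ'}

Each store-file block written for that family then runs through the gzip
codec, which is why an unpooled codec produces one of the log lines above
per block during flushes and major compactions.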
