hbase-user mailing list archives

From Ryan Rawson <ryano...@gmail.com>
Subject Re: LZO compression in HBase
Date Tue, 28 Jul 2009 22:06:45 GMT
Use the shell:

major_compact 'table'
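
For reference, here's a minimal sketch of the full sequence in the HBase shell,
assuming a table named 'mytable' with a column family named 'cf' (substitute your
own names), and assuming the LZO jars and native libraries are already installed
on every node per the wiki page linked below:

  disable 'mytable'
  alter 'mytable', {NAME => 'cf', COMPRESSION => 'LZO'}
  enable 'mytable'
  major_compact 'mytable'

You can check that the flag took effect with describe 'mytable'.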

good luck!

On Tue, Jul 28, 2009 at 3:03 PM, llpind <sonny_heer@hotmail.com> wrote:
>
> Can I run major_compact from the web UI by clicking on the HBase table and
> then clicking Compact?
>
> Thanks
>
> Ryan Rawson wrote:
>>
>> Hi,
>>
>> You should enable LZO compression.  Both read and write performance go up.
>>
>> Follow these instructions to get the basics set up:
>> http://wiki.apache.org/hadoop/UsingLzoCompression
>>
>> Once your cluster is restarted with the new jars and native libs,
>> disable the tables.  Then alter them to include the
>> compression=>'LZO' flag.  Re-enable them.  Kick off a major_compact on
>> each table and the new files will be written in LZO.
>>
>> -ryan
>>
>> On Tue, Jul 28, 2009 at 2:23 PM, llpind <sonny_heer@hotmail.com> wrote:
>>>
>>> Hey,
>>>
>>> I have a couple of tall tables (~120M rows each, with small columns).  I was
>>> wondering what kind of read performance I can expect using LZO
>>> compression?
>>>
>>> Also, is there a way to enable compression on an existing HBase table, or
>>> do I have to drop, recreate, and reload all of the data?
>>>
>>> Thanks
>>>
>>>
>>
>>
>
>
>
