hbase-user mailing list archives

From "Buttler, David" <buttl...@llnl.gov>
Subject RE: Effect of turning major compactions off..
Date Thu, 27 May 2010 22:16:46 GMT
I imagine that you have developed some tools to help you dig through a log file.  Are there
any tips and techniques you can recommend to make it easier to scan through?
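One simple approach is to grep the region server log for the lines of interest and aggregate them. A sketch (the sample log content below is a stand-in built from the line quoted later in this thread; a real RS log path would replace it):

```shell
# Create a tiny stand-in log to demonstrate on (in practice, point at the real RS log).
cat > rs-sample.log <<'EOF'
2010-05-26 05:46:05,938 DEBUG org.apache.hadoop.hbase.regionserver.CompactSplitThread: Compaction requested for region DocData,32016328,1274284525421/910285966 because: Region has too many store files
2010-05-26 05:46:06,100 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: unrelated line
EOF

# Count compaction requests per region to spot the hottest regions.
grep 'Compaction requested' rs-sample.log \
  | grep -o 'region [^ ]*' \
  | sort | uniq -c | sort -rn
```

The same `grep | sort | uniq -c` pattern works for any recurring log message (flushes, blocking updates, session timeouts).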
Dave


-----Original Message-----
From: jdcryans@gmail.com [mailto:jdcryans@gmail.com] On Behalf Of Jean-Daniel Cryans
Sent: Wednesday, May 26, 2010 4:01 PM
To: user@hbase.apache.org
Subject: Re: Effect of turning major compactions off..

You can also post a full region server log file, we don't mind
digging. Put it on a web server.

J-D

On Wed, May 26, 2010 at 12:03 PM, Vidhyashankar Venkataraman
<vidhyash@yahoo-inc.com> wrote:
> These are my current configs:
> hbase.regionserver.handler.count    100   (default was a much smaller number, 25 or so)
> hbase.hregion.memstore.block.multiplier 4
> hbase.hstore.blockingStoreFiles    16 (default was 4.. Could this be the reason? But I don't see any IOExceptions in my log)
> hbase.hregion.majorcompaction     691200000  (major compactions off)
> hfile.block.cache.size    0.5   (default was 0.2)
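For reference, the settings listed above would take this shape in hbase-site.xml (values copied from the list; a sketch, not a verified config — note that 691200000 ms works out to 8 days):

```xml
<!-- Sketch of the settings quoted above, as they would appear in hbase-site.xml -->
<configuration>
  <property>
    <name>hbase.regionserver.handler.count</name>
    <value>100</value>
  </property>
  <property>
    <name>hbase.hregion.memstore.block.multiplier</name>
    <value>4</value>
  </property>
  <property>
    <name>hbase.hstore.blockingStoreFiles</name>
    <value>16</value>
  </property>
  <property>
    <!-- 691200000 ms = 8 days; effectively disables periodic major compactions -->
    <name>hbase.hregion.majorcompaction</name>
    <value>691200000</value>
  </property>
  <property>
    <name>hfile.block.cache.size</name>
    <value>0.5</value>
  </property>
</configuration>
```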
>
> As for the logs, I do see a lot of
> 2010-05-26 05:46:05,938 DEBUG org.apache.hadoop.hbase.regionserver.CompactSplitThread: Compaction requested for region DocData,32016328,1274284525421/910285966 because: Region has too many store files
>
> But no IOExceptions.. And these compaction requests were turned off since they hadn't crossed the ttl..
>
> I will post the more relevant pieces of the log..
> Vidhya
>
>
> On 5/26/10 10:19 AM, "Jonathan Gray" <jgray@facebook.com> wrote:
>
> If you can post the logs somewhere that would be very helpful.
>
> At 2000 regions/node you probably need to continue to increase the ulimit.  You might also need more handlers in the RS and DN.
>
>> -----Original Message-----
>> From: Vidhyashankar Venkataraman [mailto:vidhyash@yahoo-inc.com]
>> Sent: Wednesday, May 26, 2010 10:09 AM
>> To: user@hbase.apache.org
>> Subject: Re: Effect of turning major compactions off..
>>
>> No OOME or HDFS errors that I can see in the logs..
>> I turned major compaction on and restarted HBase: now the RS's aren't
>> shutting down: compactions are happening..
>>
>> I had set the ulimit to 8000 a while back.. Should I increase it further,
>> then? (With the current setting, each region can have a max of around 4
>> open files if there are 2000 regions per node)...
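The arithmetic above can be checked directly (numbers from the message; the per-region breakdown is a simplification, since descriptors are also used for sockets and logs):

```python
# File-descriptor budget per region, using the numbers from the thread.
ulimit_fds = 8000        # current open-file limit on the node
regions_per_node = 2000  # regions served per region server

fds_per_region = ulimit_fds / regions_per_node
print(fds_per_region)    # 4.0 -> only ~4 open store files per region

# With blockingStoreFiles raised to 16, the worst case is far larger:
blocking_store_files = 16
worst_case = regions_per_node * blocking_store_files
print(worst_case)        # 32000 descriptors in the worst case
```

This is one way to see why an 8000 ulimit is tight at 2000 regions/node once store files are allowed to accumulate.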
>>
>> Let me also check the logs a little more carefully and get back to the
>> forum..
>>
>> Thank you
>> Vidhya
>>
>>
>> On 5/26/10 9:38 AM, "Jean-Daniel Cryans" <jdcryans@apache.org> wrote:
>>
>> I'm pretty sure something else is going on.
>>
>> 1) What does it log when it shuts down? Zookeeper session timeout?
>> OOME? HDFS errors?
>>
>> 2) Is your cluster meeting all the requirements? Especially the last
>> bullet point? See
>> http://hadoop.apache.org/hbase/docs/r0.20.4/api/overview-summary.html#requirements
>>
>> J-D
>>
>> On Wed, May 26, 2010 at 9:07 AM, Vidhyashankar Venkataraman
>> <vidhyash@yahoo-inc.com> wrote:
>> > Are there any side effects to turning major compactions off, other
>> than just a hit in the read performance?
>> >
>> > I was trying to merge a 120 GB update (modify/insert/delete
>> operations) into a 2 TB fully compacted HBase table with 5 region
>> servers using a MapReduce job.. Each RS was serving around 2000
>> regions (256 MB max size)... Major compactions were turned off before
>> the job started (by setting the compaction period very high, to around 4
>> or 5 days)..
>> >
>> > As the job was going on, the region servers just shut down after the
>> table reached near-100% fragmentation (as shown in the web interface)..
>> On looking at the RS logs, I saw that there were compaction checks for
>> each region which obviously didn't clear, and the RS's shut down soon
>> after the checks..  I tried restarting the database after killing the
>> MapReduce job (still with major compactions turned off).. The RS's
>> shut down soon after booting up..
>> >
>> >   Is this expected? Even if the update files (the additional
>> StoreFiles) per region get huge, won't the region get split on its own?
>> >
>> > Thank you
>> > Vidhya
>> >
>
>
>

