hbase-user mailing list archives

From Vladimir Rodionov <vrodio...@carrieriq.com>
Subject RE: Hbase tuning for heavy write cluster
Date Sat, 25 Jan 2014 04:10:33 GMT
160 active regions?
With 16GB of heap and the default 0.4 global memstore fraction, your cluster makes tiny flushes, ~40MB in size
- you can check the RS log file.
A large number of small files triggers frequent minor compactions. The smaller the flush
size, the more times the same data will be read and rewritten during compaction cycles,
which is why it is important to keep the memstore flush size at a healthy level.
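
A rough back-of-the-envelope for where the ~40MB comes from (assuming all 160 regions take writes evenly, HBase 0.94 defaults):

    global memstore budget = 16GB heap x 0.4 upperLimit  = 6.4GB
    effective flush size   = 6.4GB / 160 active regions  = ~40MB per region

So regions get force-flushed by global memstore pressure long before they reach the configured hbase.hregion.memstore.flush.size (128MB by default).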

Reducing the number of active regions per server and increasing the max region size will definitely
help. Make the region size at least 10GB.
With your 16GB of heap and the default 0.4 memstore ratio (6.4GB), I would decrease the number of
regions per server to 25-50.
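
A minimal hbase-site.xml sketch of the above (property names as in HBase 0.94 / CDH 4.3; the 10GB value is illustrative, tune it to your data):

    <!-- Let regions grow to ~10GB before splitting: fewer, larger regions per server -->
    <property>
      <name>hbase.hregion.max.filesize</name>
      <value>10737418240</value> <!-- 10GB -->
    </property>
    <!-- Global memstore fraction; 0.4 is already the default, shown for clarity -->
    <property>
      <name>hbase.regionserver.global.memstore.upperLimit</name>
      <value>0.4</value>
    </property>

Note that a bigger max filesize only stops new splits; shrinking the existing region count means merging regions or re-creating the table with pre-splits.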

Best regards,
Vladimir Rodionov
Principal Platform Engineer
Carrier IQ, www.carrieriq.com
e-mail: vrodionov@carrieriq.com

From: Rohit Dev [rohitdevel14@gmail.com]
Sent: Friday, January 24, 2014 6:38 PM
To: user@hbase.apache.org
Subject: Re: Hbase tuning for heavy write cluster

Hi Kevin,

We have about 160 regions per server, with a 16GB max region size and 10
drives for HBase. I've looked at disk IO and that doesn't seem to be
a problem (% utilization is < 2 across all disks).
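
For reference, a quick way to watch this (assuming the sysstat package is installed):

    iostat -x 5    # per-device stats every 5s; %util is the last column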

Any suggestion on what heap size I should allocate? Normally I allocate 16GB.

Also, I read that increasing hbase.hstore.blockingStoreFiles and
hbase.hregion.memstore.block.multiplier is a good idea for a write-heavy
cluster, but in my case it seems to be heading in the wrong direction.
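
For context on how those two knobs relate to the blocking message in [1] below (a sketch, assuming HBase 0.94 defaults): updates to a region are blocked once its memstore reaches hbase.hregion.memstore.flush.size * hbase.hregion.memstore.block.multiplier, i.e. 128MB * 2 = 256MB with defaults, which matches the "blocking 256 M size" in the log. The values I tried:

    <!-- Raising these trades read amplification (more store files on disk)
         for fewer write stalls -->
    <property>
      <name>hbase.hstore.blockingStoreFiles</name>
      <value>15</value> <!-- default 7 -->
    </property>
    <property>
      <name>hbase.hregion.memstore.block.multiplier</name>
      <value>8</value>  <!-- default 2; blocking threshold becomes 128MB * 8 = 1GB -->
    </property>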


On Fri, Jan 24, 2014 at 6:31 PM, Kevin O'dell <kevin.odell@cloudera.com> wrote:
> Rohit,
>   A 64GB heap is not ideal; you will run into some weird issues. How many
> regions are you running per server, how many drives are in each node, and have
> you changed any other settings from the defaults?
> On Jan 24, 2014 6:22 PM, "Rohit Dev" <rohitdevel14@gmail.com> wrote:
>> Hi,
>> We are running OpenTSDB on a CDH 4.3 HBase cluster, with mostly
>> default settings. The cluster is write-heavy and I'm trying to see
>> which parameters I can tune to optimize write performance.
>> # I get messages related to memstores [1] and slow responses [2] very
>> often - is this an indication of an issue?
>> I tried increasing some parameters on one node:
>>  - hbase.hstore.blockingStoreFiles - from default 7 to 15
>>  - hbase.hregion.memstore.block.multiplier - from default 2 to 8
>>  - and heap size from 16GB to 64GB
>>  * The 'compaction queue' went up to ~200 within 60 mins of restarting the
>> region server with the new parameters, and the log started to get even
>> noisier.
>> Can anyone please suggest whether I'm going in the right direction with
>> these new settings, or if there are other things I could monitor or
>> change to make it better?
>> Thank you!
>> [1]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Blocking updates
>> for 'IPC Server handler 19 on 60020' on region
>> tsdb,\x008XR\xE0i\x90\x00\x00\x02Q\x7F\x1D\x00\x00(\x00\x0B]\x00\x008M(r\x00\x00Bl\xA7\x8C,1390556781703.0771bf90cab25c503d3400206417f6bf.:
>> memstore size 256.3 M is >= than blocking 256 M size
>> [2]
>>  WARN org.apache.hadoop.ipc.HBaseServer: (responseTooSlow):
>> {"processingtimems":17887,"call":"multi(org.apache.hadoop.hbase.client.MultiAction@586940ea),
>> rpc version=1, client version=29, methodsFingerPrint=0","client":"",
>> "starttimems":1390587959182,"queuetimems":1498,"class":"HRegionServer","responsesize":0,"method":"multi"}

