hbase-user mailing list archives

From Vladimir Rodionov <vladrodio...@gmail.com>
Subject Re: Spikes when writing data to HBase
Date Tue, 11 Aug 2015 16:58:17 GMT
Monitor GC events (application stop time). Your RS may have non-optimal
HotSpot GC settings. Search the Internet for how to tune GC for large heaps.
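
For example, a minimal sketch for hbase-env.sh (assuming a HotSpot JVM,
Java 7/8, with CMS; the log path and occupancy fraction are illustrative
and should be adapted to your cluster):

# Log every GC pause and the total stop-the-world time
export HBASE_OPTS="$HBASE_OPTS -verbose:gc -XX:+PrintGCDetails \
  -XX:+PrintGCDateStamps -XX:+PrintGCApplicationStoppedTime \
  -Xloggc:/var/log/hbase/gc-regionserver.log"

# Start CMS early enough that it finishes before the heap fills up
export HBASE_OPTS="$HBASE_OPTS -XX:+UseConcMarkSweepGC \
  -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly"

Then grep the GC log for "Total time for which application threads were
stopped" and correlate those pauses with your write-latency spikes.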

-Vlad

On Tue, Aug 11, 2015 at 9:54 AM, Vladimir Rodionov <vladrodionov@gmail.com>
wrote:

> *Common questions:*
>
>
>    1. How large is your single write?
>    2. Do you see any RegionTooBusyException in the client log files?
>    3. How large is your table (# of regions, # of column families)?
>    4. RS memory-related config: max heap?
>    5. Memstore size (if not the default, 0.4)?
>
>
> Memstore flush
>
> hbase.hregion.memstore.flush.size = 256M
> hbase.hregion.memstore.block.multiplier = N (to avoid blocking writes); N *
> 256M MUST be greater than the overall memstore size (HBASE_HEAPSIZE *
> hbase.regionserver.global.memstore.size)
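>
> A minimal sketch in hbase-site.xml (the 4 GB RS heap and the default
> global memstore fraction of 0.4 are illustrative assumptions; with them
> the overall memstore limit is ~1.6 GB, and a multiplier of 8 allows
> 8 * 256M = 2 GB per region, which is above that limit):
>
> <property>
>   <name>hbase.hregion.memstore.flush.size</name>
>   <value>268435456</value> <!-- 256 MB -->
> </property>
> <property>
>   <name>hbase.hregion.memstore.block.multiplier</name>
>   <value>8</value>
> </property>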
>
> WAL files.
>
> Set the HDFS block size to 256MB. hbase.regionserver.hlog.blocksize = 0.95 *
> HDFS block size (256MB * 0.95). Keep hbase.regionserver.hlog.blocksize *
> hbase.regionserver.maxlogs just a bit above hbase.regionserver.global.memstore.lowerLimit
> (0.35-0.45) * HBASE_HEAPSIZE to avoid premature memstore flushing.
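>
> A worked example (a sketch; the 4 GB heap and the 0.40 lower limit are
> illustrative assumptions):
>
>   dfs.blocksize                      = 256 MB
>   hbase.regionserver.hlog.blocksize  = 0.95 * 256 MB ~= 243 MB
>   hbase.regionserver.maxlogs         = 7
>   check: 7 * 243 MB ~= 1.66 GB, just above 0.40 * 4 GB = 1.6 GB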
>
> *Do you see any region splits?*
>
> Region splits block writes. Try to pre-split the table and avoid splitting
> after that. Disable splitting completely:
>
> hbase.regionserver.region.split.policy =
> org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy
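>
> For example (a sketch; the table name, column family and split points are
> illustrative), pre-split at create time from the HBase shell:
>
>   create 'mytable', 'cf', SPLITS => ['1000', '2000', '3000']
>   # or let HBase compute even splits over hex-encoded row keys:
>   create 'mytable', 'cf', {NUMREGIONS => 32, SPLITALGO => 'HexStringSplit'}
>
> and then set the split policy above in hbase-site.xml so the regions stay
> where you put them.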
>
> -Vlad
>
>
>
>
> On Tue, Aug 11, 2015 at 3:22 AM, Serega Sheypak <serega.sheypak@gmail.com>
> wrote:
>
>> Hi, we are using version 1.0.0+cdh5.4.4+160
>> We have a heavy write load, ~10K writes per second.
>> We have 10 nodes with 7 disks each. I read some perf notes stating that
>> HBase can handle 1K writes per second per node without any problems.
>>
>>
>> I see some spikes on the "writers". Write operation timing "jumps" from
>> 40-50ms to 200-500ms. Probably I hit the memstore limit: the RegionServer
>> starts to flush the memstore and stops accepting updates.
>>
>> I have several questions:
>> 1. Can a node with 4 CPUs (8 with hyperthreading) and 7 HDDs absorb 1K
>> writes per second?
>> 2. What is the right way to fight blocked writes?
>> 2.1. What I did:
>> hbase.hregion.memstore.flush.size to 256M to produce larger HFiles when
>> flushing the memstore
>> hbase.hregion.memstore.block.multiplier to 4, since I have only one
>> write-intensive table. Let it grow.
>> hbase.regionserver.optionallogflushinterval to 10s; I CAN lose some data,
>> NP here. The idea is to reduce I/O pressure on the disks.
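>>
>> For reference, the three changes above in one place (a sketch; the flush
>> size is in bytes and the interval in milliseconds):
>>
>>   hbase.hregion.memstore.flush.size            = 268435456  (256M)
>>   hbase.hregion.memstore.block.multiplier      = 4
>>   hbase.regionserver.optionallogflushinterval  = 10000      (10s)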
>> ===
>> Not sure if I can correctly play with these parameters.
>> hbase.hstore.blockingStoreFiles=10
>> hbase.hstore.compactionThreshold=3
>>
>
>
