hbase-user mailing list archives

From 苏铖 <such...@lietou.com>
Subject Re: How to adjust hbase settings when too many store files?
Date Mon, 29 Oct 2012 05:33:25 GMT
I'm using HBase 0.92.0 and Hadoop 0.20.205.0.
I didn't understand what you meant by "pre split".
I didn't create one large file containing all the data and use HBase bulk load.
I used HTable.put to write the data, 1,000 records at a time.

Thanks.
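For reference, the batched-put pattern described above might look like the following sketch against the HBase 0.92 client API. It requires a running cluster and the HBase client jars; the table name, column family, and row-key scheme are placeholders for illustration, not the poster's actual code.

```java
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class BatchedPutExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Table and column family names are placeholders.
        HTable table = new HTable(conf, "statistic_visit_detail");
        List<Put> batch = new ArrayList<Put>(1000);
        for (long i = 0; i < 1000000L; i++) {
            Put put = new Put(Bytes.toBytes("row-" + i));
            put.add(Bytes.toBytes("cf1"), Bytes.toBytes("q"), Bytes.toBytes("v" + i));
            batch.add(put);
            if (batch.size() == 1000) {  // send 1,000 records per round trip
                table.put(batch);
                batch.clear();
            }
        }
        if (!batch.isEmpty()) {
            table.put(batch);  // flush the final partial batch
        }
        table.close();
    }
}
```

Batching this way reduces RPC round trips, but every record still flows through the memstore, so a sustained load can outrun flushes and compactions, which is what the log below shows.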


-----Original Message-----
From: yuzhihong@gmail.com [mailto:yuzhihong@gmail.com] 
Sent: October 29, 2012 12:31
To: user@hbase.apache.org
Cc: <user@hbase.apache.org>
Subject: Re: How to adjust hbase settings when too many store files?

What version of hbase were you using ?
Did you pre split the table before loading ?

Thanks
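(For context: pre-splitting means creating the table with initial region boundaries so a heavy load is spread across several regions and region servers from the start, instead of pounding a single region. A minimal sketch with the 0.92 admin API; the table name, family, and split keys are assumptions chosen to match the row keys visible in the log, not a recommendation.)

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.util.Bytes;

public class PreSplitExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);
        HTableDescriptor desc = new HTableDescriptor("statistic_visit_detail");
        desc.addFamily(new HColumnDescriptor("cf1"));
        // Split keys are placeholders; pick boundaries that match
        // the actual row-key distribution (here: date-prefixed keys).
        byte[][] splits = new byte[][] {
            Bytes.toBytes("20120801|"),
            Bytes.toBytes("20120811|"),
            Bytes.toBytes("20120821|")
        };
        admin.createTable(desc, splits);  // table starts with 4 regions
        admin.close();
    }
}
```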



On Oct 28, 2012, at 8:33 PM, 苏铖 <sucheng@lietou.com> wrote:

> Hello. I encounter a region server error when I try to put bulk data from a
> java client.
> 
> The java client extracts data from a relational database and puts that data
> into hbase.
> 
> When I try to extract data from a large table (say, 1 billion records), the
> error happens.
> 
> 
> 
> The region server's log says:
> 
> 
> 
>> 2012-10-28 00:00:02,169 WARN org.apache.hadoop.hbase.regionserver.MemStoreFlusher: Region statistic_visit_detail,20120804|72495|854956,1351353594195.ad2592ee7a3610c60c47cf8be77496c8. has too many store files; delaying flush up to 90000ms
> 
>> 2012-10-28 00:00:02,791 DEBUG org.apache.hadoop.hbase.regionserver.MemStoreFlusher: Flush thread woke up because memory above low water=347.1m
> 
>> 2012-10-28 00:00:02,791 DEBUG org.apache.hadoop.hbase.regionserver.MemStoreFlusher: Under global heap pressure: Region statistic_visit_detail,20120804|72495|854956,1351353594195.ad2592ee7a3610c60c47cf8be77496c8. has too many store files, but is 141.5m vs best flushable region's 46.8m. Choosing the bigger.
> 
>> 2012-10-28 00:00:02,791 INFO org.apache.hadoop.hbase.regionserver.MemStoreFlusher: Flush of region statistic_visit_detail,20120804|72495|854956,1351353594195.ad2592ee7a3610c60c47cf8be77496c8. due to global heap pressure
> 
> ...
> 
> 
> 
> And finally, 
> 
> 
> 
>> 2012-10-28 00:00:43,511 INFO org.apache.hadoop.hbase.regionserver.HRegion: compaction interrupted by user
> 
>> java.io.InterruptedIOException: Aborting compaction of store cf1 in region statistic_visit_detail,20120804|72495|854956,1351353594195.ad2592ee7a3610c60c47cf8be77496c8. because user requested stop.
>        at org.apache.hadoop.hbase.regionserver.Store.compactStore(Store.java:1275)
>        at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:765)
>        at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1023)
>        at org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest.run(CompactionRequest.java:177)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>        at java.lang.Thread.run(Thread.java:662)
> 
> 
> 
> Then the region server shuts down.
> 
> 
> 
> It seems that too many store files (due to too many records from the
> relational DB) consumed too much memory, if I'm right.
> 
> I'm new to hbase. What settings should I adjust? Or should I even add more
> region servers?
> 
> I'm going to do some research by myself, and any advice will be appreciated.
> 
> Best regards,
> 
> 
> 
> Su
> 
> 
> 
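The warnings in the quoted log map onto a handful of region-server settings. A hedged hbase-site.xml sketch of the knobs one might look at first; the values shown are illustrative starting points, not recommendations, and defaults here are from the 0.92 line:

```xml
<!-- hbase-site.xml fragment; values are illustrative, not recommendations -->
<property>
  <!-- Store files per store before flushes are blocked
       ("has too many store files; delaying flush"); default 7 -->
  <name>hbase.hstore.blockingStoreFiles</name>
  <value>15</value>
</property>
<property>
  <!-- How long a flush is delayed when blocked; this is the
       "delaying flush up to 90000ms" seen in the log -->
  <name>hbase.hstore.blockingWaitTime</name>
  <value>90000</value>
</property>
<property>
  <!-- Store-file count that triggers a minor compaction; default 3 -->
  <name>hbase.hstore.compactionThreshold</name>
  <value>3</value>
</property>
<property>
  <!-- Per-region memstore size before flush, in bytes -->
  <name>hbase.hregion.memstore.flush.size</name>
  <value>134217728</value>
</property>
```

Raising blockingStoreFiles trades read latency for write throughput during a bulk import; pre-splitting the table (as asked above) and giving the region servers more heap attack the same symptom from the other side.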

