hbase-user mailing list archives

From Stack <st...@duboce.net>
Subject Re: region server died after inserting big data
Date Thu, 16 Feb 2012 17:02:46 GMT
On Wed, Feb 15, 2012 at 10:44 PM, Tianwei <tianwei.sheng@gmail.com> wrote:
> limit, it will die. From the log, it seems that the compact/split will fail
> due to the memory problem:
>  2012-02-15 21:39:58,013 ERROR
> org.apache.hadoop.hbase.regionserver.CompactSplitThread: Compaction/Split
> failed for region test_table,ali
> ramos,1329364709394.5e1b41c1ea5e87d75fbac2e5fb26e68b.
>  I know very little about the internal implementation of HBase; could you
> guys give me some suggestions on the following questions:
>   1. Why does the memory usage of the region server keep increasing? Is it
> simply because I am writing big data into the HBase table? Which parts of
> HBase use more memory as the table grows? Are there any
> configuration options for me to alleviate this problem?
>   2. Why does the region server die? Is it because the GC is not quick enough
> to free memory for HBase? I assume that writing data and compacting/splitting
> all need to allocate new memory, and if the GC is not quick enough, these
> functions will simply get exceptions and cause the region server to die. Is
> that right?

We need more logs from the regionserver, and you should enable GC logging.
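For an HBase 0.90-era deployment, GC logging is usually turned on by appending flags to HBASE_OPTS in conf/hbase-env.sh. A minimal sketch, assuming a Sun/Oracle JDK 6/7 JVM and the stock file layout (the log path is only an example):

```shell
# conf/hbase-env.sh -- append GC logging flags (Sun/Oracle JDK 6/7 era).
# -verbose:gc and -XX:+PrintGCDetails record every collection;
# -Xloggc sends the output to its own file instead of stdout.
export HBASE_OPTS="$HBASE_OPTS -verbose:gc -XX:+PrintGCDetails \
  -XX:+PrintGCDateStamps -Xloggc:$HBASE_HOME/logs/gc-hbase.log"
```

Long pauses (full GCs running seconds or longer) in that log, lined up against the regionserver log timestamps, will confirm or rule out a GC-pause theory.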

Heap increases as you put more data in (more regions, more memstores).
It's natural!
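The memstore share of the heap is bounded by a few knobs in hbase-site.xml. A sketch with the property names from that era's hbase-default.xml; the values shown are illustrative, check the defaults shipped with your version:

```xml
<!-- hbase-site.xml: memstore sizing knobs (values are examples) -->
<property>
  <name>hbase.regionserver.global.memstore.upperLimit</name>
  <!-- block updates and force flushes when all memstores
       together reach this fraction of heap -->
  <value>0.4</value>
</property>
<property>
  <name>hbase.hregion.memstore.flush.size</name>
  <!-- per-region memstore flush threshold in bytes (64MB) -->
  <value>67108864</value>
</property>
```

Lowering these trades write throughput for heap headroom; the block cache (hfile.block.cache.size) competes for the rest of the heap.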

My guess is you are losing your session with zookeeper because of a
big GC pause.  Have you done any GC tuning?  Using default configs?
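If you are on default configs, the usual starting point for that era was to switch the old generation to the CMS collector so it collects concurrently instead of stopping the world for a full GC. A hedged sketch for conf/hbase-env.sh (flag values are common recommendations, not tuned for your workload):

```shell
# conf/hbase-env.sh -- CMS collector flags often suggested for HBase 0.90.
# Starting CMS early (at 70% old-gen occupancy) reduces the chance of a
# concurrent-mode failure falling back to a long stop-the-world full GC.
export HBASE_OPTS="$HBASE_OPTS -XX:+UseConcMarkSweepGC \
  -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled"
```

Raising zookeeper.session.timeout in hbase-site.xml can also buy headroom against pause-induced session expiry, at the cost of slower detection of genuinely dead regionservers.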

> We used hbase-0.90.1 version and this problem really bothers us a lot. Hope
> you can give us some suggestions.

Update your hbase.  That'll probably help too.

