hbase-user mailing list archives

From Jean-Daniel Cryans <jdcry...@apache.org>
Subject Re: xceiver count, regionserver shutdown
Date Mon, 06 Feb 2012 19:59:09 GMT
The number of regions is the first thing to check; after that it's about the
actual number of open blocks. Is the issue happening during a heavy
insert? In that case I guess you could end up with hundreds of open
files if the compactions are piling up. Setting a bigger memstore
flush size would definitely help... but then again, if your insert
pattern is random enough, all 200 regions will have filled memstores, so
you'd end up with hundreds of super small files...
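
(For reference: the memstore flush size mentioned above is controlled by
hbase.hregion.memstore.flush.size in hbase-site.xml. A minimal sketch; the
256 MB value below is only an illustrative assumption, not a recommendation
from this thread, and the region servers need a restart to pick it up:)

  <!-- hbase-site.xml: illustrative value only; 256 MB = 268435456 bytes -->
  <property>
    <name>hbase.hregion.memstore.flush.size</name>
    <value>268435456</value>
  </property>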

Please tell us more about the context of when this issue happens.

J-D

On Mon, Feb 6, 2012 at 11:42 AM, Bryan Keller <bryanck@gmail.com> wrote:
> I am trying to resolve an issue with my cluster when I am loading a bunch of data into
> HBase. I am reaching the "xceiver" limit on the data nodes. Currently I have this set to 4096.
> The data node is logging "xceiverCount 4097 exceeds the limit of concurrent xcievers 4096".
> The regionservers eventually shut down. I have read the various threads on this issue.
>
> I have 4 datanodes/regionservers. Each regionserver has only around 200 regions. The
> table has 2 column families. I have the region file size set to 500 MB, and I'm using Snappy
> compression. This problem is occurring on HBase 0.90.4 and Hadoop 0.20.2 (both Cloudera cdh3u3).
>
> From what I have read, the number of regions on a node can cause the xceiver limit to
> be reached, but it doesn't seem like I have an excessive number of regions. I want the table
> to scale higher, so simply upping the xceiver limit could perhaps get my table functional
> for now, but it seems it will only be a temporary fix.
>
> Is the number of regions the only factor that can cause this problem, or are there other
> factors involved that I may be able to adjust?
>
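
As a rough back-of-envelope check on the numbers above: every open store file
and every active read/write stream tends to tie up datanode xceiver threads, so
on a heavy, random-key load roughly 200 regions x 2 column families x a handful
of store files per store, plus WALs, flushes and compactions, can plausibly push
a 4-datanode cluster toward a 4096 limit. The limit itself is set in
hdfs-site.xml on the datanodes; a minimal sketch, using the property name as
spelled in Hadoop 0.20.x and simply mirroring the 4096 value quoted above
(datanodes need a restart for the change to take effect):

  <!-- hdfs-site.xml on each datanode; value mirrors the limit quoted above -->
  <property>
    <name>dfs.datanode.max.xcievers</name>
    <value>4096</value>
  </property>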
