hbase-user mailing list archives

From Jean-Daniel Cryans <jdcry...@apache.org>
Subject Re: Hbase 0.19 failed to start: exceeds the limit of concurrent xcievers 3000
Date Wed, 28 Jan 2009 14:01:54 GMT
Genady,

Some comments.

Try a bigger heap size, something like 2GB.
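In 0.19-era HBase the heap is usually set in conf/hbase-env.sh; a minimal sketch, assuming the stock layout (the value is in MB):

```
# conf/hbase-env.sh -- give HBase a 2 GB heap instead of the 1 GB default.
# HBASE_HEAPSIZE is read by the start scripts and passed to the JVM as -Xmx.
export HBASE_HEAPSIZE=2000
```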

Set the handler count to 4; each handler eats a lot of memory.
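Assuming this refers to the datanode handler count you have set to 6, the corresponding hadoop-site.xml entry would look like:

```
<!-- hadoop-site.xml: lower the datanode handler count from 6 to 4.
     (Assumes the "handler count" above means dfs.datanode.handler.count.) -->
<property>
  <name>dfs.datanode.handler.count</name>
  <value>4</value>
</property>
```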

430 regions for 3 nodes is really a lot, and HBase currently opens a lot of
files. Try increasing the max file size of your tables so that regions take
longer to split and you therefore end up with fewer regions. Search the list
on how to do that.
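A sketch of what that looks like in hbase-site.xml, assuming the default split threshold of 256 MB (the exact value to use depends on your data):

```
<!-- hbase-site.xml: raise the region split threshold from the 256 MB
     default so regions split later and the region count stays lower. -->
<property>
  <name>hbase.hregion.max.filesize</name>
  <value>1073741824</value> <!-- 1 GB, expressed in bytes -->
</property>
```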

Also do you happen to have a lot of families in your tables?

Thx,

J-D

On Wed, Jan 28, 2009 at 8:53 AM, Genady <genadyg@exelate.com> wrote:

> Hi,
>
>
>
> It seems that HBase 0.19 on Hadoop 0.19 fails to start because it exceeds
> the limit of concurrent xceivers (seen in the Hadoop datanode logs), which
> is currently 3000. Setting more than 3000 xceivers causes a JVM
> out-of-memory exception. Is there something wrong with the configuration
> parameters of the cluster (three nodes, 430 regions, Hadoop heap size is
> the default 1GB)?
> Additional parameters in hbase configuration are:
>
> dfs.datanode.handler.count = 6,
>
> dfs.datanode.socket.write.timeout=0
>
>
>
> java.io.IOException: xceiverCount 3001 exceeds the limit of concurrent
> xcievers 3000
>
>        at
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:87)
>
>        at java.lang.Thread.run(Thread.java:619)
>
>
>
> Any help is very appreciated,
>
> Genady
>
