hbase-user mailing list archives

From stack <st...@duboce.net>
Subject Re: Datanode Xceivers
Date Mon, 12 Jan 2009 07:34:19 GMT
Luo Ning, over the weekend, has made some comments you might be 
interested in over in HBASE-24 Jean-Adrien.

Jean-Adrien wrote:
> Hi everybody,
> I saw that you added some advice about Hadoop settings for the
> max-xceivers-reached problem to the troubleshooting section of the
> wiki.
> On this topic, I recently posted a question to the hadoop-core user mailing
> list about their 'xcievers' thread behavior, since I still have to increase
> their number as my HBase table grows, in order to avoid reaching the limit
> at startup time. As a result my JVM uses a lot of virtual memory (with
> 500MB for the heap, 1100 threads allocate 2GB of virtual memory). This
> eventually leads to swapping and failure.
> Here is the link to my post, with a graph showing the number of threads the
> datanode creates when I start HBase.
> http://www.nabble.com/xceiverCount-limit-reason-td21349807.html#a21352818
> You can see that all threads are created at HBase startup time, and, if the
> timeout (dfs.datanode.socket.write.timeout) is set, they all end with a
> timeout failure.
> The question for HBase is: why are the connections to Hadoop (and the
> corresponding threads) kept open? Does this happen only in my case?
> I think that Slava has the same problem, but I don't think everybody does,
> since otherwise no cluster could run without disabling the timeout parameter
> dfs.datanode.socket.write.timeout.
> Has anybody else made these observations?
> Thanks
> Jean-Adrien
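[Editor's note: the two settings discussed in this thread live in hdfs-site.xml on each datanode. A minimal sketch follows; the values shown are illustrative defaults-era choices, not recommendations from this thread.]

```xml
<!-- hdfs-site.xml on each datanode; restart the datanodes after changing -->
<property>
  <!-- Upper bound on concurrent xceiver threads per datanode.
       Note the historical misspelling "xcievers" in the property name. -->
  <name>dfs.datanode.max.xcievers</name>
  <value>4096</value>
</property>
<property>
  <!-- Write timeout in milliseconds; setting 0 disables the timeout,
       the workaround mentioned in this thread. -->
  <name>dfs.datanode.socket.write.timeout</name>
  <value>0</value>
</property>
```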
