hbase-issues mailing list archives

From "Colin Patrick McCabe (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-9393) HBase does not close a closed socket, resulting in many CLOSE_WAIT
Date Mon, 18 Jan 2016 23:13:39 GMT

    [ https://issues.apache.org/jira/browse/HBASE-9393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15105939#comment-15105939 ]

Colin Patrick McCabe commented on HBASE-9393:

Unfortunately, this is kind of a complex topic.

In HDFS, sockets for input streams are managed by the {{Peer}} class.  A {{Peer}} can either
be "owned" by a {{DFSInputStream}} object or stored in the {{PeerCache}}.  The {{PeerCache}}
already has appropriate timeouts and won't keep too many sockets open.  However, there is
no limit on how long a {{DFSInputStream}} can hold on to a {{Peer}}.
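
To illustrate why the cache side stays bounded, here is a toy model in Java (class and method names are hypothetical, not the actual HDFS code): entries older than an expiry are evicted on each insert, so the number of cached sockets cannot grow without bound the way stream-owned ones can.

```java
import java.util.ArrayDeque;
import java.util.Deque;

/** Toy model of a peer cache with an expiry timeout.
 *  All names are hypothetical; this is not the HDFS implementation. */
public class ToyPeerCache {
    private static class Entry {
        final String peer;        // e.g. "host:port" of a cached connection
        final long insertedAtMs;
        Entry(String peer, long insertedAtMs) {
            this.peer = peer;
            this.insertedAtMs = insertedAtMs;
        }
    }

    private final Deque<Entry> cache = new ArrayDeque<>();
    private final long expiryMs;

    public ToyPeerCache(long expiryMs) { this.expiryMs = expiryMs; }

    /** Insert a peer, first evicting anything past the expiry. */
    public void put(String peer, long nowMs) {
        evictExpired(nowMs);
        cache.addLast(new Entry(peer, nowMs));
    }

    /** Entries older than expiryMs are dropped; a real cache would
     *  close the underlying socket here, bounding open descriptors. */
    private void evictExpired(long nowMs) {
        while (!cache.isEmpty()
                && nowMs - cache.peekFirst().insertedAtMs > expiryMs) {
            cache.removeFirst();
        }
    }

    public int size() { return cache.size(); }

    public static void main(String[] args) {
        ToyPeerCache c = new ToyPeerCache(100);
        c.put("dn1:50010", 0);
        c.put("dn2:50010", 50);
        c.put("dn3:50010", 200);          // dn1 and dn2 are past the expiry
        System.out.println(c.size());     // only dn3 remains
    }
}
```

A stream-owned {{Peer}}, by contrast, never passes through such an eviction path, which is exactly the gap option #3 below would close.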

There are a few ways to minimize the number of open peers.
1. If HBase only ever called positional read (pread), the {{DFSInputStream}} object would
never own a {{Peer}}, so this issue would not arise.
2. If HBase called {{DFSInputStream#unbuffer}}, any open peers would be closed, even though
the stream would continue to be open.
3. If HDFS had a timeout for how long it would hold onto a {{Peer}}, that could limit the
number of open sockets.
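
Option #1 works because a positional read carries no stream state between calls.  A local-file analogy using {{java.nio.channels.FileChannel}} (plain JDK I/O, not the HDFS client API) shows the shape of the call:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class PreadDemo {
    /** Positional read: fetch len bytes at the given offset without
     *  touching any stream position, so no state is carried between calls. */
    static String pread(Path file, long offset, int len) throws IOException {
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
            ByteBuffer buf = ByteBuffer.allocate(len);
            ch.read(buf, offset);   // does not move the channel's position
            return new String(buf.array());
        }
    }

    public static void main(String[] args) throws IOException {
        Path p = Files.createTempFile("pread", ".dat");
        Files.write(p, "hello world".getBytes());
        System.out.println(pread(p, 6, 5));   // bytes 6..10 -> "world"
        Files.delete(p);
    }
}
```

In the HDFS case the analogous property is what lets the client avoid ever handing a {{Peer}} to the {{DFSInputStream}} to own.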

Configuring HBase to periodically close open streams is unnecessary; it is strictly worse
than option #2.

I believe there is a way to do #1 even right now.  Can't HBase be configured to use only
pread and never the stateful read path?  #2 would require a code change in HBase; #3 would
require a code change in HDFS.

Are you running out of file descriptors?  What's the user-visible problem here?
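
On the file-descriptor question, the Linux {{/proc}} filesystem makes it easy to count a process's open descriptors, the same information as {{ls /proc/<pid>/fd | wc -l}}.  A small sketch (Linux-only; the pid argument is whatever regionserver process you want to inspect):

```java
import java.io.File;

public class FdUsage {
    /** Linux-only: /proc/<pid>/fd holds one symlink per open descriptor. */
    static int countFds(String pid) {
        String[] fds = new File("/proc/" + pid + "/fd").list();
        return fds == null ? -1 : fds.length;   // -1: wrong pid or not Linux
    }

    public static void main(String[] args) {
        // "self" inspects this JVM; pass the regionserver pid to inspect it.
        String pid = args.length > 0 ? args[0] : "self";
        System.out.println("open fds: " + countFds(pid));
    }
}
```

Comparing that count against {{ulimit -n}} for the hbase user would tell us how close the regionserver actually gets to exhaustion.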

> HBase does not close a closed socket, resulting in many CLOSE_WAIT
> --------------------------------------------------------------------
>                 Key: HBASE-9393
>                 URL: https://issues.apache.org/jira/browse/HBASE-9393
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 0.94.2, 0.98.0
>         Environment: Centos 6.4 - 7 regionservers/datanodes, 8 TB per node, 7279 regions
>            Reporter: Avi Zrachya
> HBase does not close a dead connection with the datanode.
> This results in over 60K CLOSE_WAIT sockets, and at some point HBase can not connect to the
datanode because there are too many open sockets from one host to another on the same port.
> The example below shows a low CLOSE_WAIT count because we had to restart HBase to solve
the problem; over time it will increase to 60-100K sockets in CLOSE_WAIT
> [root@hd2-region3 ~]# netstat -nap |grep CLOSE_WAIT |grep 21592 |wc -l
> 13156
> [root@hd2-region3 ~]# ps -ef |grep 21592
> root     17255 17219  0 12:26 pts/0    00:00:00 grep 21592
> hbase    21592     1 17 Aug29 ?        03:29:06 /usr/java/jdk1.6.0_26/bin/java -XX:OnOutOfMemoryError=kill
-9 %p -Xmx8000m -ea -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode -Dhbase.log.dir=/var/log/hbase
-Dhbase.log.file=hbase-hbase-regionserver-hd2-region3.swnet.corp.log ...

This message was sent by Atlassian JIRA
