hadoop-common-dev mailing list archives

From "Aaron Kimball (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-4346) Hadoop triggers a "soft" fd leak.
Date Mon, 23 Feb 2009 22:50:02 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-4346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12676105#action_12676105

Aaron Kimball commented on HADOOP-4346:

The -branch-18 patch does not apply to Hadoop 0.18.3. Specifically, the following hunk fails
to apply in DataNode.java:

*** 1490,1496 ****
          InetSocketAddress proxyAddr = NetUtils.createSocketAddr(
          proxySock = newSocket();
-         proxySock.connect(proxyAddr, socketTimeout);

          OutputStream baseStream = NetUtils.getOutputStream(proxySock,

The lines near 1490 do not make any reference to 'proxySock'. 

The only definitions of "OutputStream baseStream" are in run(), copyBlock(), and readBlock().
Without more context for this patch hunk, I'm not sure where it should be applied, though the
line:

targetSock.connect(targetAddr, socketTimeout);

near line 1431 seems like a reasonable candidate, as it's the only call of the form fooSock.connect().

Raghu, am I correct here? If not, can you release a new 18-branch patch?

> Hadoop triggers a "soft" fd leak. 
> ----------------------------------
>                 Key: HADOOP-4346
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4346
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: io
>    Affects Versions: 0.17.0
>            Reporter: Raghu Angadi
>            Assignee: Raghu Angadi
>             Fix For: 0.20.0
>         Attachments: HADOOP-4346-branch-18.patch, HADOOP-4346.patch, HADOOP-4346.patch,
HADOOP-4346.patch, HADOOP-4346.patch, HADOOP-4346.patch, HADOOP-4346.patch
> Starting with Hadoop 0.17, most of the network I/O uses non-blocking NIO channels. Normal
blocking reads and writes are handled by Hadoop itself through our own cache of selectors. This
cache suits Hadoop well, where I/O often occurs on many short-lived threads: the number of fds
consumed is proportional to the number of currently blocked threads.
> If blocking I/O is done through java.*, Sun's implementation uses internal per-thread selectors.
These selectors are closed via {{sun.misc.Cleaner}}, which behaves much like a finalizer and is
tied to GC. This is ill-suited to many short-lived threads: until a GC happens, the number of
these selectors keeps growing, and each selector consumes 3 fds.
> Though blocking read and write are handled by Hadoop, {{connect()}} still uses the default
implementation, which relies on a per-thread selector.
> Koji helped a lot in tracking this. Some sections of 'jmap' output and other info Koji
collected led to this suspicion; I will include that in the next comment.
> One solution might be to handle connect() also in Hadoop using our selectors.
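
As a rough illustration of the proposed direction (not the actual patch), the sketch below shows a connect-with-timeout built on an explicitly managed NIO {{Selector}}, so the connect never falls back to the JDK's per-thread selector. The class and method names here are hypothetical; Hadoop's real fix lives in its own socket/NetUtils code.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

public class SelectorConnect {
    // Hypothetical sketch: connect a SocketChannel with a timeout using an
    // explicitly created Selector, instead of the blocking socket connect
    // path that (on Sun's JDK) allocates a per-thread selector reclaimed
    // only at GC. A production version would pool the selector rather than
    // open/close one per call.
    static void connectWithTimeout(SocketChannel ch, InetSocketAddress addr,
                                   long timeoutMs) throws IOException {
        ch.configureBlocking(false);
        if (ch.connect(addr)) {
            ch.configureBlocking(true);
            return; // connected immediately, no selector needed
        }
        try (Selector sel = Selector.open()) {
            ch.register(sel, SelectionKey.OP_CONNECT);
            // Wait until the connect completes or the timeout elapses.
            if (sel.select(timeoutMs) == 0 || !ch.finishConnect()) {
                throw new IOException("connect timed out: " + addr);
            }
        } // closing the selector deregisters the channel
        ch.configureBlocking(true);
    }

    public static void main(String[] args) throws IOException {
        // Demonstrate against a local listening socket.
        try (ServerSocket server = new ServerSocket(0)) {
            SocketChannel ch = SocketChannel.open();
            connectWithTimeout(ch,
                new InetSocketAddress("127.0.0.1", server.getLocalPort()), 2000);
            System.out.println("connected=" + ch.isConnected());
            ch.close();
        }
    }
}
```

Because the selector's lifetime is managed by the caller (here via try-with-resources, in Hadoop via a cache keyed by usage), fd consumption stays bounded by the number of concurrent operations rather than growing until the next GC.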

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
