hadoop-common-dev mailing list archives

From "Ankur (JIRA)" <j...@apache.org>
Subject [jira] Issue Comment Edited: (HADOOP-4346) Hadoop triggers a "soft" fd leak.
Date Fri, 10 Oct 2008 12:55:44 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-4346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12638515#action_12638515
] 

ankur edited comment on HADOOP-4346 at 10/10/08 5:54 AM:
---------------------------------------------------------

That would explain the kind of file handle leaks we are observing when writing from an Apache
HTTP server via a custom log-writer to HDFS. The log-writer opens a new file for writing every
X minutes and keeps writing the Apache log entries that it receives through a pipe. Underneath
the log-writer, the DFSClient opens a new blocking connection (DFSClient.connect()) and
transfers the data via data-streamer threads. In our case X is large enough for the wrapped
selector objects to move to "old space" and never get garbage collected, because a full GC
never runs: the program never exceeds its memory requirements.
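
To make the failure mode concrete, here is a minimal sketch of such a rolling log-writer. Everything in it is illustrative rather than taken from our actual application: the class name, the namenode URI, the /logs path, and the 10-minute roll interval (standing in for X) are all assumptions.

{code:java}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical rolling log-writer: reads Apache log lines from a pipe on
// stdin and writes them to HDFS, opening a fresh file every roll interval.
public class RollingHdfsLogWriter {
  private static final long ROLL_INTERVAL_MS = 10 * 60 * 1000; // stands in for "X minutes"

  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:9000/"), new Configuration());
    BufferedReader pipe = new BufferedReader(new InputStreamReader(System.in));

    FSDataOutputStream out = null;
    long rollAt = 0;
    String line;
    while ((line = pipe.readLine()) != null) {
      long now = System.currentTimeMillis();
      if (out == null || now >= rollAt) {
        if (out != null) {
          out.close();
        }
        // Each create() makes the DFSClient set up new connections to the
        // namenode/datanodes; with the java.* blocking connect() this is
        // where the hidden per-thread selectors (and their fds) pile up.
        out = fs.create(new Path("/logs/access." + now + ".log"));
        rollAt = now + ROLL_INTERVAL_MS;
      }
      out.writeBytes(line + "\n");
    }
    if (out != null) {
      out.close();
    }
  }
}
{code}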

I think having this patch will alleviate the problem. Another option for client applications
is to force a full GC at regular intervals. I will try forcing a full GC and see if that works
in my case.
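
For reference, that workaround would look something like this minimal sketch. The 10-minute period is an arbitrary example, and {{System.gc()}} is only a hint to the JVM (it becomes a no-op under -XX:+DisableExplicitGC):

{code:java}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Workaround sketch: periodically request a full GC so the sun.misc.Cleaner
// hooks attached to abandoned per-thread selectors get a chance to run and
// release the fds they hold.
public class PeriodicGcWorkaround {
  public static void start() {
    ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
    timer.scheduleAtFixedRate(new Runnable() {
      public void run() {
        System.gc(); // only a hint; the JVM may ignore it
      }
    }, 10, 10, TimeUnit.MINUTES);
  }
}
{code}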

> Hadoop triggers a "soft" fd leak. 
> ----------------------------------
>
>                 Key: HADOOP-4346
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4346
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: io
>    Affects Versions: 0.17.0
>            Reporter: Raghu Angadi
>            Assignee: Raghu Angadi
>         Attachments: HADOOP-4346-branch-18.patch, HADOOP-4346.patch, HADOOP-4346.patch,
HADOOP-4346.patch
>
>
> Starting with Hadoop 0.17, most of the network I/O uses non-blocking NIO channels. Normal
blocking reads and writes are handled by Hadoop itself, using our own cache of selectors. This
cache suits Hadoop well, since I/O often occurs on many short-lived threads: the number of fds
consumed is proportional to the number of threads currently blocked.
> If blocking I/O is done using java.*, Sun's implementation uses internal per-thread selectors.
These selectors are closed through {{sun.misc.Cleaner}}; the cleaning works much like finalizers
and is tied to GC. That is ill-suited to a workload with many short-lived threads: until a GC
happens, the number of these selectors keeps growing, and each selector consumes 3 fds.
> Though blocking read and write are handled by Hadoop, {{connect()}} still goes through the
default implementation, which uses a per-thread selector.
> Koji helped a lot in tracking this down. Some sections from 'jmap' output and other info
Koji collected led to this suspicion; I will include that in the next comment.
> One solution might be to handle {{connect()}} in Hadoop as well, using our own selectors (see the sketch below).
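
For illustration, here is a minimal sketch of that idea: perform the connect in non-blocking mode against an explicitly managed selector, so the hidden per-thread selector (and its 3 fds) is never created. This is not the attached patch; in Hadoop the selector would be borrowed from the existing selector cache rather than opened and closed per call, and the names below are only illustrative.

{code:java}
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

public final class NonBlockingConnect {
  public static SocketChannel connect(InetSocketAddress addr, long timeoutMs)
      throws IOException {
    SocketChannel ch = SocketChannel.open();
    ch.configureBlocking(false);
    if (ch.connect(addr)) {
      return ch; // connected immediately (e.g. loopback)
    }
    Selector selector = Selector.open(); // a cached selector in the real fix
    try {
      SelectionKey key = ch.register(selector, SelectionKey.OP_CONNECT);
      if (selector.select(timeoutMs) == 0) {
        ch.close();
        throw new IOException("connect to " + addr + " timed out");
      }
      key.cancel();
      if (!ch.finishConnect()) {
        ch.close();
        throw new IOException("connect to " + addr + " failed");
      }
      return ch;
    } finally {
      selector.close(); // the real fix would return it to the cache instead
    }
  }
}
{code}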

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

