hadoop-hdfs-issues mailing list archives

From "Ravi Prakash (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-2132) Potential resource leak in EditLogFileOutputStream.close
Date Wed, 06 Jul 2011 20:07:16 GMT

https://issues.apache.org/jira/browse/HDFS-2132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13060793#comment-13060793

Ravi Prakash commented on HDFS-2132:

I am new to Hadoop, so please forgive me if I do not understand the philosophies behind this
patch. If any of the close methods fails, it will throw an IOException that is propagated
up the stack. Isn't that how Java normally works?
Comments on your patch:
1. In normal operation, every close method inside the try block is called once, and then once
again in the IOUtils.cleanup method. What purpose does this serve? I would rather the methods
be called only once.
2. In the finally block, any IOExceptions that were thrown are logged and then
programmatically swallowed. The upstream callers are never made aware of these IOExceptions,
and I am not sure this is the right behavior.
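The two points above can be illustrated with a minimal, self-contained sketch (this is not the actual HDFS code; cleanup() is a simplified stand-in for Hadoop's IOUtils.cleanup, and CountingResource is a hypothetical resource used only to count close() calls):

```java
import java.io.Closeable;
import java.io.IOException;

public class DoubleCloseSketch {
    // Simplified stand-in for Hadoop's IOUtils.cleanup: close each resource,
    // logging and swallowing any IOException (point 2 above).
    static void cleanup(Closeable... closeables) {
        for (Closeable c : closeables) {
            if (c == null) continue;
            try {
                c.close();
            } catch (IOException e) {
                // logged and swallowed -- upstream callers never see this
                System.err.println("Exception in closing " + c + ": " + e);
            }
        }
    }

    // Hypothetical resource that counts how many times close() is invoked.
    static class CountingResource implements Closeable {
        int closeCount = 0;
        @Override public void close() { closeCount++; }
    }

    public static void main(String[] args) throws IOException {
        CountingResource a = new CountingResource();
        CountingResource b = new CountingResource();
        try {
            a.close();          // first close, inside the try block
            b.close();
        } finally {
            cleanup(a, b);      // second close (point 1 above)
        }
        // Each resource's close() has now run twice.
        System.out.println("a: " + a.closeCount + ", b: " + b.closeCount);
    }
}
```

Running this prints "a: 2, b: 2", showing that on the happy path every close() runs twice, and that any IOException raised during cleanup() is only logged, never rethrown.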

> Potential resource leak in EditLogFileOutputStream.close
> --------------------------------------------------------
>                 Key: HDFS-2132
>                 URL: https://issues.apache.org/jira/browse/HDFS-2132
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 0.23.0
>            Reporter: Aaron T. Myers
>            Assignee: Aaron T. Myers
>             Fix For: 0.23.0
>         Attachments: hdfs-2132.0.patch
> {{EditLogFileOutputStream.close(...)}} sequentially closes a series of underlying resources.
> If any of the calls to {{close()}} throw before the last one, the later resources will never
> be closed.
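The leak pattern the issue describes can be sketched as follows (illustrative only; Resource and its names are hypothetical stand-ins for the stream's underlying resources, not the actual EditLogFileOutputStream internals):

```java
import java.io.Closeable;
import java.io.IOException;

public class SequentialCloseLeak {
    // Hypothetical resource that can be configured to fail on close().
    static class Resource implements Closeable {
        final String name;
        final boolean failOnClose;
        boolean closed = false;
        Resource(String name, boolean failOnClose) {
            this.name = name;
            this.failOnClose = failOnClose;
        }
        @Override public void close() throws IOException {
            if (failOnClose) throw new IOException(name + " failed to close");
            closed = true;
        }
    }

    public static void main(String[] args) {
        Resource first = new Resource("first", true);    // fails on close
        Resource second = new Resource("second", false);
        try {
            first.close();   // throws, so...
            second.close();  // ...this line never runs: second is leaked
        } catch (IOException e) {
            System.err.println(e.getMessage());
        }
        System.out.println("second closed? " + second.closed);
    }
}
```

This prints "second closed? false": because the closes are sequential and unguarded, an exception from the first close() skips the rest, which is the potential resource leak the patch addresses.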

This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

