hadoop-common-issues mailing list archives

From "Seb Mo (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (HADOOP-13264) Hadoop HDFS - DFSOutputStream close method fails to clean up resources in case no hdfs datanodes are accessible
Date Tue, 14 Jun 2016 20:27:30 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-13264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15330278#comment-15330278 ]

Seb Mo edited comment on HADOOP-13264 at 6/14/16 8:26 PM:
----------------------------------------------------------

Thanks [~kihwal], but that fix does not seem to address this problem.

The DFSOutputStream#close() -> closeImpl() -> flushInternal() -> checkClosed() call still
throws lastException.get(), so back up the stack in DFSOutputStream#close,
dfsClient.endFileLease(fileId) is still never called because of the exception thrown in the
checkClosed method.
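
To make the failure mode concrete, this is roughly the flow I mean (a simplified paraphrase for illustration only, not the actual DFSOutputStream source; only the method names match the real code):

{code:java}
// Simplified sketch of the reported flow (illustrative only).
public synchronized void close() throws IOException {
  closeImpl();                     // -> flushInternal() -> checkClosed(), which
                                   //    rethrows lastException.get() when the
                                   //    datanode pipeline has already failed,
                                   //    so the exception propagates out of close()
  dfsClient.endFileLease(fileId);  // never reached, so the entry in
                                   // DFSClient#filesBeingWritten is never removed
}
{code}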

Just to make sure, I synced the 2.7 branch and built the latest 2.7.3 on my box; re-running
my test still shows the problem: filesBeingWritten still keeps a reference to the stream
that could not be closed.
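
For reference, a minimal pattern that triggers the leak looks roughly like the sketch below. This is not the test program attached to the issue; it is just a hypothetical illustration, and it assumes fs.defaultFS points at a cluster whose datanodes are unreachable from the client:

{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CloseLeakRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Assumption: fs.defaultFS points at an HDFS cluster whose datanodes
    // cannot be reached from this client.
    FileSystem fs = FileSystem.get(conf);
    for (int i = 0; i < 1000; i++) {
      FSDataOutputStream out = fs.create(new Path("/tmp/leak-" + i));
      try {
        out.write(new byte[]{1, 2, 3});
        out.close();  // fails: no datanode can be contacted for the pipeline
      } catch (IOException e) {
        // close() threw before the lease was ended, so with the bug the
        // DFSClient#filesBeingWritten map keeps a reference to the stream.
      }
    }
    // Because the same FileSystem (and hence the same DFSClient) is re-used,
    // the leaked entries accumulate across iterations.
  }
}
{code}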


was (Author: sebyonthenet):
Thanks [~kihwal]. 

The DFSOutputStream#close() -> closeImpl() -> flushInternal() -> checkClosed() call still
throws lastException.get(), so back up the stack in DFSOutputStream#close,
dfsClient.endFileLease(fileId) is still never called because of the exception thrown in the
checkClosed method.

Just to make sure, I synced the 2.7 branch and built the latest 2.7.3 on my box; re-running
my test still shows the problem: filesBeingWritten still keeps a reference to the stream
that was not closed.

> Hadoop HDFS - DFSOutputStream close method fails to clean up resources in case no hdfs datanodes are accessible
> ----------------------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-13264
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13264
>             Project: Hadoop Common
>          Issue Type: Bug
>    Affects Versions: 2.7.2
>            Reporter: Seb Mo
>
> Using:
> hadoop-hdfs\2.7.2\hadoop-hdfs-2.7.2-sources.jar!\org\apache\hadoop\hdfs\DFSOutputStream.java
> The close method fails when the client can't connect to any datanodes. When the same
> DistributedFileSystem is re-used in the same JVM and none of the datanodes can be reached,
> this causes a memory leak, because the DFSClient#filesBeingWritten map is never cleared afterwards.
> See test program provided by [~sebyonthenet] in comments below.



