hadoop-hdfs-issues mailing list archives

From "Andrew Wang (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-11229) HDFS-11056 failed to close meta file
Date Fri, 09 Dec 2016 21:34:58 GMT

    [ https://issues.apache.org/jira/browse/HDFS-11229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15736407#comment-15736407 ]

Andrew Wang commented on HDFS-11229:
------------------------------------

+1 LGTM too. Please also set the target versions in the future, since this looks like a 3.0.0-alpha2
blocker as well.

> HDFS-11056 failed to close meta file
> ------------------------------------
>
>                 Key: HDFS-11229
>                 URL: https://issues.apache.org/jira/browse/HDFS-11229
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>    Affects Versions: 2.7.4
>            Reporter: Wei-Chiu Chuang
>            Assignee: Wei-Chiu Chuang
>            Priority: Blocker
>         Attachments: HDFS-11229.001.patch
>
>
> The following code fails to close the meta file after reading it.
> {code:title=FsVolumeImpl#loadLastPartialChunkChecksum}
>     RandomAccessFile raf = new RandomAccessFile(metaFile, "r");
>     raf.seek(offsetInChecksum);
>     raf.read(lastChecksum, 0, checksumSize);
>     return lastChecksum;
> {code}
> This must be fixed because every append operation uses this piece of code. Without an
> explicit close, the number of open files can reach the system limit before the
> RandomAccessFile objects are garbage collected.
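
A minimal sketch of one way to address the leak, wrapping the RandomAccessFile in
try-with-resources so it is closed on every exit path; the surrounding class, method
signature, and the use of readFully are assumptions for illustration and may differ
from the attached HDFS-11229.001.patch.

{code:title=Illustrative sketch only (not the attached patch)}
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

class LastPartialChunkChecksumSketch {
  // Hypothetical helper mirroring FsVolumeImpl#loadLastPartialChunkChecksum:
  // reads checksumSize bytes at offsetInChecksum from the meta file and
  // guarantees the file is closed even if seek/read throws.
  static byte[] loadLastPartialChunkChecksum(File metaFile, long offsetInChecksum,
      int checksumSize) throws IOException {
    byte[] lastChecksum = new byte[checksumSize];
    try (RandomAccessFile raf = new RandomAccessFile(metaFile, "r")) {
      raf.seek(offsetInChecksum);
      // readFully (instead of the original read) also ensures the whole
      // checksum is read; the close behavior is what matters here.
      raf.readFully(lastChecksum, 0, checksumSize);
    }
    return lastChecksum;
  }
}
{code}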



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


