hadoop-hdfs-issues mailing list archives

From "Wei-Chiu Chuang (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-11229) HDFS-11056 failed to close meta file
Date Fri, 09 Dec 2016 22:59:58 GMT

     [ https://issues.apache.org/jira/browse/HDFS-11229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang updated HDFS-11229:
    Release Note: The fix for HDFS-111056 reads meta file to load last partial chunk checksum
when a block is converted from finalized/temporary to rbw. However, it did not close the file
explicitly. This may cause number of open files reaching system limit.

> HDFS-11056 failed to close meta file
> ------------------------------------
>                 Key: HDFS-11229
>                 URL: https://issues.apache.org/jira/browse/HDFS-11229
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>    Affects Versions: 2.7.4, 3.0.0-alpha2
>            Reporter: Wei-Chiu Chuang
>            Assignee: Wei-Chiu Chuang
>            Priority: Blocker
>         Attachments: HDFS-11229.001.patch, HDFS-11229.branch-2.patch
> The following code fails to close the file after reading it.
> {code:title=FsVolumeImpl#loadLastPartialChunkChecksum}
>     RandomAccessFile raf = new RandomAccessFile(metaFile, "r");
>     raf.seek(offsetInChecksum);
>     raf.read(lastChecksum, 0, checksumSize);
>     return lastChecksum;
> {code}
> This must be fixed because every append operation uses this piece of code. Without an
> explicit close, the number of open files can reach the system limit before the
> RandomAccessFile objects are garbage collected.
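
For illustration, a minimal sketch of the kind of fix this implies (a hypothetical rewrite
using try-with-resources, not the attached patch; the variables are those from the snippet
above):

{code:title=sketch (hypothetical)}
// try-with-resources closes the meta file even if seek() or read() throws,
// so the descriptor is released immediately instead of at GC/finalization time.
try (RandomAccessFile raf = new RandomAccessFile(metaFile, "r")) {
  raf.seek(offsetInChecksum);
  raf.read(lastChecksum, 0, checksumSize);
  return lastChecksum;
}
{code}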

This message was sent by Atlassian JIRA

To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org
