hadoop-hdfs-issues mailing list archives

From "dhruba borthakur (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HDFS-970) FSImage writing should always fsync before close
Date Sat, 15 May 2010 05:05:45 GMT

     [ https://issues.apache.org/jira/browse/HDFS-970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

dhruba borthakur updated HDFS-970:
----------------------------------

           Status: Resolved  (was: Patch Available)
     Hadoop Flags: [Reviewed]
    Fix Version/s: 0.22.0
       Resolution: Fixed

I just committed this, thanks Todd.

> FSImage writing should always fsync before close
> ------------------------------------------------
>
>                 Key: HDFS-970
>                 URL: https://issues.apache.org/jira/browse/HDFS-970
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: name-node
>    Affects Versions: 0.20.1, 0.21.0, 0.22.0
>            Reporter: Todd Lipcon
>            Assignee: Todd Lipcon
>            Priority: Critical
>             Fix For: 0.22.0
>
>         Attachments: hdfs-970.txt
>
>
> Without an fsync, it's common for filesystems to delay writing metadata to the
> journal until all of the data blocks have been flushed. If the system crashes while
> the dirty pages haven't been flushed, the file is left in an indeterminate state. On
> some filesystems (e.g. ext4) this results in a 0-length file. On others (e.g. XFS) it
> results in the correct length but any number of data blocks getting zeroed. Calling
> FileChannel.force before closing the FSImage prevents this issue.
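
As an illustration only (the actual fix is the attached hdfs-970.txt, not this
sketch), the pattern described above looks roughly like the following; writeImage
and its parameters are hypothetical names:

    import java.io.File;
    import java.io.FileOutputStream;
    import java.io.IOException;

    public class FsyncOnClose {
        // Write the image bytes and force them to stable storage before close,
        // so a crash cannot leave a truncated or zero-filled file behind.
        static void writeImage(File file, byte[] imageBytes) throws IOException {
            FileOutputStream out = new FileOutputStream(file);
            try {
                out.write(imageBytes);
                out.flush();                  // drain stream buffers to the OS
                out.getChannel().force(true); // fsync: flush data and metadata to disk
            } finally {
                out.close();
            }
        }
    }

Note that force(true) flushes file metadata as well as file content, while
force(false) flushes only the content.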

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

