hadoop-hdfs-dev mailing list archives

From "dhruba borthakur (JIRA)" <j...@apache.org>
Subject [jira] Created: (HDFS-1539) prevent data loss when a cluster suffers a power loss
Date Thu, 16 Dec 2010 00:07:01 GMT
prevent data loss when a cluster suffers a power loss

                 Key: HDFS-1539
                 URL: https://issues.apache.org/jira/browse/HDFS-1539
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: data-node, hdfs client, name-node
            Reporter: dhruba borthakur

We have seen an instance where an external outage caused many datanodes to reboot at around
the same time. This resulted in many corrupted blocks. These were recently written blocks;
the current implementation of the HDFS Datanode does not sync the data of a block file when
the block is closed.
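For illustration only, syncing at finalize time boils down to forcing the block file (and ideally its meta/checksum file) to stable storage before the block is reported as finalized. A minimal sketch with plain java.io handles, not the actual BlockReceiver code:

    import java.io.File;
    import java.io.FileOutputStream;
    import java.io.IOException;

    public class BlockSyncSketch {
      /**
       * Force a block file to stable storage before the block is declared
       * finalized. Without the sync() call, a power loss can leave the file
       * with stale or truncated contents even though the datanode already
       * reported the block as complete.
       */
      static void syncOnFinalize(File blockFile) throws IOException {
        // Open for append so existing block data is left untouched.
        FileOutputStream out = new FileOutputStream(blockFile, true);
        try {
          out.getFD().sync();   // ask the OS to push the file's data to the device
        } finally {
          out.close();
        }
      }
    }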

1. Have a cluster-wide config setting that causes the datanode to sync a block file when the
block is finalized.
2. Introduce a new parameter to FileSystem.create() to trigger the new behaviour, i.e.
cause the datanode to sync a block file when it is finalized.
3. Implement FSDataOutputStream.hsync() to cause all data written to the specified file
to be written to stable storage (a client-side sketch follows below).
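On the client side, option 3 would let an application force durability explicitly. The sketch below assumes the proposed FSDataOutputStream.hsync() method and a standard HDFS client configuration; it is illustrative only, since at the time of this issue neither hsync() nor any create() flag from option 2 is implemented.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HsyncSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        FSDataOutputStream out = fs.create(new Path("/tmp/durable-file"));
        try {
          out.write("important record\n".getBytes("UTF-8"));
          // Proposed call: block until the data written so far is on
          // stable storage at the datanodes, not just in their buffers.
          out.hsync();
        } finally {
          out.close();
        }
      }
    }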

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
