hadoop-hdfs-issues mailing list archives

From "Hairong Kuang (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-1539) prevent data loss when a cluster suffers a power loss
Date Wed, 22 Dec 2010 21:31:06 GMT

    [ https://issues.apache.org/jira/browse/HDFS-1539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12974400#action_12974400 ]

Hairong Kuang commented on HDFS-1539:

Yes. Should
+        this.cout = new BufferedOutputStream(streams.checksumOut, 
+                                                  SMALL_BUFFER_SIZE);
 be this.cout = streams.checksumOut instead?
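
For context, the distinction matters when the datanode later needs to force data onto disk: flushing a BufferedOutputStream only pushes buffered bytes into the underlying stream (and the OS page cache), while an actual sync needs the FileDescriptor of the underlying FileOutputStream. A minimal standalone sketch of that pattern (this is not the BlockReceiver code; the file name and buffer size below are illustrative only):

    import java.io.BufferedOutputStream;
    import java.io.FileDescriptor;
    import java.io.FileOutputStream;
    import java.io.IOException;

    public class SyncOnCloseSketch {
        public static void main(String[] args) throws IOException {
            // Illustrative file name; in the datanode this would be the block or checksum file.
            FileOutputStream fout = new FileOutputStream("example.meta");
            BufferedOutputStream bout = new BufferedOutputStream(fout, 4096);

            bout.write(new byte[]{1, 2, 3, 4});

            // flush() only moves buffered bytes into the FileOutputStream / page cache.
            bout.flush();

            // To survive a power loss the file descriptor must be synced explicitly.
            // This requires a handle on the underlying FileOutputStream, which is why
            // it matters whether cout refers to the buffered wrapper or the raw stream.
            FileDescriptor fd = fout.getFD();
            fd.sync();

            bout.close();
        }
    }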

> prevent data loss when a cluster suffers a power loss
> -----------------------------------------------------
>                 Key: HDFS-1539
>                 URL: https://issues.apache.org/jira/browse/HDFS-1539
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: data-node, hdfs client, name-node
>            Reporter: dhruba borthakur
>            Assignee: dhruba borthakur
>         Attachments: syncOnClose1.txt
> We have seen an instance where an external outage caused many datanodes to reboot at around
> the same time. This resulted in many corrupted blocks. These were recently written blocks;
> the current implementation of HDFS Datanodes does not sync the data of a block file when the
> block is closed.
> 1. Have a cluster-wide config setting that causes the datanode to sync a block file when
> a block is finalized.
> 2. Introduce a new parameter to FileSystem.create() to trigger the new behaviour,
> i.e. cause the datanode to sync a block file when it is finalized.
> 3. Implement FSDataOutputStream.hsync() to cause all data written to the specified
> file to be written to stable storage.
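
As a rough illustration of proposal 3, this is how a client might use an hsync() call on FSDataOutputStream once it exists. This is a hedged sketch, not the patch attached to this issue: the path is made up, hsync() is assumed to behave as proposed here (persist all data written so far to stable storage on the datanodes), and the new create() parameter from proposal 2 is not shown because its signature is not specified in this issue.

    import java.nio.charset.StandardCharsets;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HsyncSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            // Illustrative path; any HDFS path would do.
            Path p = new Path("/tmp/hsync-example.dat");

            FSDataOutputStream out = fs.create(p);
            out.write("important record".getBytes(StandardCharsets.UTF_8));

            // Proposal 3: hsync() asks that all data written so far be persisted
            // to stable storage on the datanodes, not merely flushed to their buffers.
            out.hsync();

            out.close();
            fs.close();
        }
    }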

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
