hadoop-hdfs-issues mailing list archives

From "Tsz Wo (Nicholas), SZE (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-1539) prevent data loss when a cluster suffers a power loss
Date Mon, 12 Nov 2012 20:49:13 GMT

     [ https://issues.apache.org/jira/browse/HDFS-1539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]

Tsz Wo (Nicholas), SZE updated HDFS-1539:
-----------------------------------------

    Fix Version/s: 1.1.1
    
> prevent data loss when a cluster suffers a power loss
> -----------------------------------------------------
>
>                 Key: HDFS-1539
>                 URL: https://issues.apache.org/jira/browse/HDFS-1539
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: data-node, hdfs client, name-node
>            Reporter: dhruba borthakur
>            Assignee: dhruba borthakur
>             Fix For: 0.23.0, 1.1.1
>
>         Attachments: syncOnClose1.txt, syncOnClose2_b-1.txt, syncOnClose2.txt
>
>
> we have seen an instance where an external power outage caused many datanodes to reboot at around the same time. This resulted in many corrupted blocks, all recently written; the current implementation of the HDFS Datanode does not sync the data of a block file when the block is closed.
> 1. Have a cluster-wide config setting that causes the datanode to sync a block file when the block is finalized.
> 2. Introduce a new parameter to FileSystem.create() to trigger this behaviour, i.e. cause the datanode to sync the block file when it is finalized.
> 3. Implement FSDataOutputStream.hsync() to cause all data written to the specified file to be written to stable storage.
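
A minimal sketch of what "sync on finalize" means at the OS level. This is not the Hadoop patch itself; it uses only the JDK, with FileChannel.force(true) standing in for the fsync call that FSDataOutputStream.hsync() ultimately issues on the datanode's block file. The class and file names here are hypothetical.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import static java.nio.file.StandardOpenOption.*;

public class SyncOnClose {
    // Write a block file's payload and flush it to stable storage before
    // returning. Without the force(true) call, a power loss shortly after
    // close() can leave a zero-length or truncated block on disk, because
    // close() alone only hands the data to the kernel's page cache.
    static void writeAndSync(Path blockFile, byte[] data) throws IOException {
        try (FileChannel ch = FileChannel.open(blockFile, CREATE, WRITE, TRUNCATE_EXISTING)) {
            ByteBuffer buf = ByteBuffer.wrap(data);
            while (buf.hasRemaining()) {
                ch.write(buf);      // data may still sit in the page cache here
            }
            ch.force(true);         // fsync: flush file data and metadata to disk
        }
    }

    public static void main(String[] args) throws IOException {
        Path p = Paths.get("blk_0000_demo");   // hypothetical block file name
        writeAndSync(p, "finalized block payload".getBytes());
        System.out.println("synced " + Files.size(p) + " bytes");
        Files.delete(p);
    }
}
```

The trade-off the proposal leaves to a config setting: forcing a sync on every block finalize bounds data loss after a power failure, but adds an fsync's latency to every block close, which is why it is opt-in per cluster (item 1) or per stream (items 2 and 3).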

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
