Delivered-To: apmail-hadoop-hdfs-issues-archive@minotaur.apache.org
Mailing-List: contact hdfs-issues-help@hadoop.apache.org; run by ezmlm
Precedence: bulk
Reply-To: hdfs-issues@hadoop.apache.org
Delivered-To: mailing list hdfs-issues@hadoop.apache.org
Date: Sat, 10 Nov 2012 23:33:12 +0000 (UTC)
From: "Suresh Srinivas (JIRA)"
To: hdfs-issues@hadoop.apache.org
Message-ID: <100339739.97181.1352590392275.JavaMail.jiratomcat@arcas>
In-Reply-To: <15764769.148771292458021796.JavaMail.jira@thor>
Subject: [jira] [Commented] (HDFS-1539) prevent data loss when a cluster suffers a power loss
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit

    [ https://issues.apache.org/jira/browse/HDFS-1539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13494800#comment-13494800 ]

Suresh Srinivas commented on HDFS-1539:
---------------------------------------

Nicholas, I compared the backported patch with the original. It looks good.

+1 for the patch. We should get this into 1.1.1.

> prevent data loss when a cluster suffers a power loss
> -----------------------------------------------------
>
>                 Key: HDFS-1539
>                 URL: https://issues.apache.org/jira/browse/HDFS-1539
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: data-node, hdfs client, name-node
>            Reporter: dhruba borthakur
>            Assignee: dhruba borthakur
>             Fix For: 0.23.0
>
>         Attachments: syncOnClose1.txt, syncOnClose2_b-1.txt, syncOnClose2.txt
>
>
> We have seen an instance where an external power outage caused many datanodes to reboot at around the same time. This resulted in many corrupted blocks, all of them recently written: the current HDFS Datanode implementation does not sync the data of a block file when the block is closed. Proposed changes:
> 1. Have a cluster-wide config setting that causes the datanode to sync a block file when a block is finalized.
> 2. Introduce a new parameter to FileSystem.create() to trigger the new behaviour, i.e. cause the datanode to sync a block file when it is finalized.
> 3. Implement FSDataOutputStream.hsync() to cause all data written to the specified file to be written to stable storage.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
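
[Editor's note] For readers unfamiliar with item 3 in the quoted description, below is a minimal sketch (not part of the attached patches) of how a client might call FSDataOutputStream.hsync() to force written data to stable storage before depending on it surviving a power loss. The file path and class name are hypothetical; the sketch assumes a reachable HDFS configured in the default Configuration. The cluster-wide setting from item 1 is exposed in later Hadoop releases as a datanode property (dfs.datanode.synconclose, if memory serves), which makes the datanode fsync block files as they are finalized.

  // Hypothetical example; illustrates the hsync() call named in item 3 above.
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FSDataOutputStream;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class HsyncExample {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      FileSystem fs = FileSystem.get(conf);

      Path file = new Path("/tmp/hsync-example.dat");   // hypothetical path
      try (FSDataOutputStream out = fs.create(file, true /* overwrite */)) {
        out.writeBytes("durable record\n");
        // hsync() flushes client-side buffers and asks the datanodes to sync
        // the data to disk, so a crash or power loss after this call should
        // not lose the bytes written so far.
        out.hsync();
      }
      fs.close();
    }
  }

A write that only reaches the datanodes' page cache (e.g. plain close() without sync-on-close enabled) is exactly the window the issue describes: a simultaneous reboot can discard those cached blocks, which is why the patch adds the sync-on-finalize option and the explicit hsync() hook.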