Date: Wed, 31 May 2017 19:21:04 +0000 (UTC)
From: "Vinayakumar B (JIRA)"
To: hdfs-issues@hadoop.apache.org
Subject: [jira] [Updated] (HDFS-5042) Completed files lost after power failure

     [ https://issues.apache.org/jira/browse/HDFS-5042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vinayakumar B updated HDFS-5042:
--------------------------------
    Attachment: HDFS-5042-branch-2.7-06.patch

Updated branch-2.7 patch as well.
> Completed files lost after power failure
> ----------------------------------------
>
>                 Key: HDFS-5042
>                 URL: https://issues.apache.org/jira/browse/HDFS-5042
>             Project: Hadoop HDFS
>          Issue Type: Bug
>         Environment: ext3 on CentOS 5.7 (kernel 2.6.18-274.el5)
>            Reporter: Dave Latham
>            Assignee: Vinayakumar B
>            Priority: Critical
>             Fix For: 2.9.0, 3.0.0-alpha4
>
>         Attachments: HDFS-5042-01.patch, HDFS-5042-02.patch, HDFS-5042-03.patch, HDFS-5042-04.patch, HDFS-5042-05-branch-2.patch, HDFS-5042-05.patch, HDFS-5042-branch-2-01.patch, HDFS-5042-branch-2-05.patch, HDFS-5042-branch-2.7-05.patch, HDFS-5042-branch-2.7-06.patch, HDFS-5042-branch-2.8-05.patch, HDFS-5042-branch-2.8-06.patch
>
>
> We suffered a cluster-wide power failure after which HDFS lost data that it had acknowledged as closed and complete.
> The client was HBase, which compacted a set of HFiles into a new HFile and then, after closing the file successfully, deleted the previous versions of the file. The cluster then lost power, and when it was brought back up the newly created file was marked CORRUPT.
> Based on reading the logs, it looks like the replicas were created by the DataNodes in the 'blocksBeingWritten' directory. Then, when the file was closed, they were moved to the 'current' directory. After the power cycle those replicas were again in the blocksBeingWritten directory of the underlying file system (ext3). When those DataNodes reported in to the NameNode, it deleted those replicas and lost the file.
> Some possible fixes could be having the DataNode fsync the directories after moving the block from blocksBeingWritten to current, to ensure the rename is durable, or having the NameNode accept replicas from blocksBeingWritten under certain circumstances. (A sketch of the directory-fsync approach appears at the end of this message.)
> Log snippets from RS (RegionServer), NN (NameNode), DN (DataNode):
> {noformat}
> RS 2013-06-29 11:16:06,812 DEBUG org.apache.hadoop.hbase.util.FSUtils: Creating file=hdfs://hm3:9000/hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/.tmp/6e0cc30af6e64e56ba5a539fdf159c4c with permission=rwxrwxrwx
> NN 2013-06-29 11:16:06,830 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.allocateBlock: /hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/.tmp/6e0cc30af6e64e56ba5a539fdf159c4c. blk_1395839728632046111_357084589
> DN 2013-06-29 11:16:06,832 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_1395839728632046111_357084589 src: /10.0.5.237:14327 dest: /10.0.5.237:50010
> NN 2013-06-29 11:16:11,370 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 10.0.6.1:50010 is added to blk_1395839728632046111_357084589 size 25418340
> NN 2013-06-29 11:16:11,370 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 10.0.6.24:50010 is added to blk_1395839728632046111_357084589 size 25418340
> NN 2013-06-29 11:16:11,385 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 10.0.5.237:50010 is added to blk_1395839728632046111_357084589 size 25418340
> DN 2013-06-29 11:16:11,385 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received block blk_1395839728632046111_357084589 of size 25418340 from /10.0.5.237:14327
> DN 2013-06-29 11:16:11,385 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 2 for block blk_1395839728632046111_357084589 terminating
> NN 2013-06-29 11:16:11,385 INFO org.apache.hadoop.hdfs.StateChange: Removing lease on file /hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/.tmp/6e0cc30af6e64e56ba5a539fdf159c4c from client DFSClient_hb_rs_hs745,60020,1372470111932
> NN 2013-06-29 11:16:11,385 INFO org.apache.hadoop.hdfs.StateChange: DIR* NameSystem.completeFile: file /hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/.tmp/6e0cc30af6e64e56ba5a539fdf159c4c is closed by DFSClient_hb_rs_hs745,60020,1372470111932
> RS 2013-06-29 11:16:11,393 INFO org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at hdfs://hm3:9000/hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/.tmp/6e0cc30af6e64e56ba5a539fdf159c4c to hdfs://hm3:9000/hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/n/6e0cc30af6e64e56ba5a539fdf159c4c
> RS 2013-06-29 11:16:11,505 INFO org.apache.hadoop.hbase.regionserver.Store: Completed major compaction of 7 file(s) in n of users-6,\x12\xBDp\xA3,1359426311784.b5b0820cde759ae68e333b2f4015bb7e. into 6e0cc30af6e64e56ba5a539fdf159c4c, size=24.2m; total size for store is 24.2m
> ------- CRASH, RESTART ---------
> NN 2013-06-29 12:01:19,743 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: addStoredBlock request received for blk_1395839728632046111_357084589 on 10.0.6.1:50010 size 21978112 but was rejected: Reported as block being written but is a block of closed file.
> NN 2013-06-29 12:01:19,743 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addToInvalidates: blk_1395839728632046111 is added to invalidSet of 10.0.6.1:50010
> NN 2013-06-29 12:01:20,155 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: addStoredBlock request received for blk_1395839728632046111_357084589 on 10.0.5.237:50010 size 16971264 but was rejected: Reported as block being written but is a block of closed file.
> NN 2013-06-29 12:01:20,155 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addToInvalidates: blk_1395839728632046111 is added to invalidSet of 10.0.5.237:50010
> NN 2013-06-29 12:01:20,175 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: addStoredBlock request received for blk_1395839728632046111_357084589 on 10.0.6.24:50010 size 21913088 but was rejected: Reported as block being written but is a block of closed file.
> NN 2013-06-29 12:01:20,175 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addToInvalidates: blk_1395839728632046111 is added to invalidSet of 10.0.6.24:50010
> (note the clock on the server running the DN is wrong after restart; I believe timestamps are off by 6 hours:)
> DN 2013-06-29 06:07:22,877 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_1395839728632046111_357084589 file /data/hadoop/dfs/data/blocksBeingWritten/blk_1395839728632046111 for deletion
> DN 2013-06-29 06:07:24,952 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_1395839728632046111_357084589 at file /data/hadoop/dfs/data/blocksBeingWritten/blk_1395839728632046111
> {noformat}
> There was some additional discussion of this on the mailing list:
> http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-user/201307.mbox/%3CCA+qbEUPuf19PL_EVeWi1104+scLVrcS0LTFUvBPw=qcuXnZ8hQ@mail.gmail.com%3E

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
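For illustration, here is a minimal sketch of the directory-fsync idea suggested in the description: after the atomic rename of a finalized replica from blocksBeingWritten to current, the parent directories themselves are fsynced so the rename survives a power failure. This is not the attached patch; the class, method names, and paths (DurableBlockFinalize, finalizeBlock, fsyncDirectory, /data/hadoop/dfs/data) are hypothetical, and opening a directory as a FileChannel in order to force() it is a Linux-specific technique.

{code:java}
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.nio.file.StandardOpenOption;

public class DurableBlockFinalize {

    // Open the directory itself and force its metadata (the rename) to disk.
    // Works on Linux (e.g. ext3/ext4); not portable to Windows.
    static void fsyncDirectory(Path dir) throws IOException {
        try (FileChannel ch = FileChannel.open(dir, StandardOpenOption.READ)) {
            ch.force(true);
        }
    }

    // Finalize a replica: rename it out of blocksBeingWritten, then make the
    // rename itself durable by fsyncing both parent directories.
    static void finalizeBlock(Path bbwDir, Path currentDir, String blockFileName)
            throws IOException {
        Path src = bbwDir.resolve(blockFileName);
        Path dst = currentDir.resolve(blockFileName);

        // The block data is already on disk; the move only touches directory entries.
        Files.move(src, dst, StandardCopyOption.ATOMIC_MOVE);

        // Without these fsyncs a power loss can leave the replica back under
        // blocksBeingWritten, which is exactly the failure shown in the logs above.
        fsyncDirectory(currentDir);
        fsyncDirectory(bbwDir);
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical layout matching the paths in the log snippets.
        Path data = Paths.get("/data/hadoop/dfs/data");
        finalizeBlock(data.resolve("blocksBeingWritten"),
                      data.resolve("current"),
                      "blk_1395839728632046111");
    }
}
{code}

The essential point is that fsyncing the block file alone is not enough: the data was already written, but the rename only updated directory entries, which on ext3 may still be sitting in the journal when power is lost.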