Date: Fri, 3 May 2013 18:02:16 +0000 (UTC)
From: "Kihwal Lee (JIRA)"
To: hdfs-issues@hadoop.apache.org
Subject: [jira] [Commented] (HDFS-3436) adding new datanode to existing pipeline fails in case of Append/Recovery

    [ https://issues.apache.org/jira/browse/HDFS-3436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13648630#comment-13648630 ]

Kihwal Lee commented on HDFS-3436:
----------------------------------

I saw this happen when the second DN in a pipeline became very slow due to hardware issues. The slow node causes writes and pipeline recoveries to fail on all downstream DNs, and eventually it fails itself. Adding a new DN and transferring the block then fails, because recoverRbw() in the earlier failed recovery attempt had already modified the generation stamp. I am glad it's already fixed in branch-2; I am pulling it into branch-0.23. HDFS-4257 would have let the writes succeed, but this bug might then have gone undetected, so it's fortunate this was fixed before HDFS-4257.
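To make that failure mode concrete, here is a minimal, self-contained model of the interaction described above. All names in it (Replica, recoverRbw, transferBlock) are illustrative stand-ins, not the actual DataNode code: the failed recovery bumps the stored replica's generation stamp, so the later TRANSFER_BLOCK request, which still carries the pre-recovery stamp, no longer matches the stored replica and is rejected.

{code:java}
import java.io.IOException;

// Simplified, hypothetical model of the generation-stamp mismatch.
// Names and checks are illustrative, not the actual Hadoop source.
public class GenStampMismatchSketch {

    static class Replica {
        final long blockId;
        long genStamp;
        Replica(long blockId, long genStamp) {
            this.blockId = blockId;
            this.genStamp = genStamp;
        }
    }

    // The earlier, failed pipeline recovery bumps the stored replica's
    // generation stamp (e.g. 1002 -> 1003), as in the DataNode trace below.
    static void recoverRbw(Replica stored, long newGenStamp) {
        stored.genStamp = newGenStamp;
    }

    // The client then asks the DN to transfer the replica to the newly
    // added pipeline node, still identifying it by the old stamp. The DN
    // cannot match the request against the replica it has on disk; this
    // is the situation behind "is neither a RBW nor a Finalized".
    static void transferBlock(Replica stored, long requestedGenStamp)
            throws IOException {
        if (stored.genStamp != requestedGenStamp) {
            throw new IOException("blk_" + stored.blockId + "_"
                + requestedGenStamp + " does not match stored replica (GS="
                + stored.genStamp + ")");
        }
    }

    public static void main(String[] args) {
        Replica stored = new Replica(-8165642083860293107L, 1002);
        recoverRbw(stored, 1003);        // failed recovery already bumped the GS
        try {
            transferBlock(stored, 1002); // transfer still uses GS 1002
        } catch (IOException e) {
            System.out.println("transfer rejected: " + e.getMessage());
        }
    }
}
{code}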
> adding new datanode to existing pipeline fails in case of Append/Recovery
> --------------------------------------------------------------------------
>
>                 Key: HDFS-3436
>                 URL: https://issues.apache.org/jira/browse/HDFS-3436
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>    Affects Versions: 2.0.0-alpha, 3.0.0
>            Reporter: Brahma Reddy Battula
>            Assignee: Vinay
>             Fix For: 2.0.2-alpha
>
>         Attachments: HDFS-3436-trunk.patch
>
>
> Scenario:
> =========
> 1. Cluster with 4 DataNodes.
> 2. Wrote a file to 3 DNs: DN1->DN2->DN3.
> 3. Stopped DN3.
> Now appending to the file fails because addDatanode2ExistingPipeline fails.
> *Client Trace*
> {noformat}
> 2012-04-24 22:06:09,947 INFO  hdfs.DFSClient (DFSOutputStream.java:createBlockOutputStream(1063)) - Exception in createBlockOutputStream
> java.io.IOException: Bad connect ack with firstBadLink as *******:50010
>     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1053)
>     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:943)
>     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:461)
> 2012-04-24 22:06:09,947 WARN  hdfs.DFSClient (DFSOutputStream.java:setupPipelineForAppendOrRecovery(916)) - Error Recovery for block BP-1023239-10.18.40.233-1335275282109:blk_296651611851855249_1253 in pipeline *****:50010, ******:50010, *****:50010: bad datanode ******:50010
> 2012-04-24 22:06:10,072 WARN  hdfs.DFSClient (DFSOutputStream.java:run(549)) - DataStreamer Exception
> java.io.EOFException: Premature EOF: no length prefix available
>     at org.apache.hadoop.hdfs.protocol.HdfsProtoUtil.vintPrefixed(HdfsProtoUtil.java:162)
>     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.transfer(DFSOutputStream.java:866)
>     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:843)
>     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:934)
>     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:461)
> 2012-04-24 22:06:10,072 WARN  hdfs.DFSClient (DFSOutputStream.java:hflush(1515)) - Error while syncing
> java.io.EOFException: Premature EOF: no length prefix available
>     at org.apache.hadoop.hdfs.protocol.HdfsProtoUtil.vintPrefixed(HdfsProtoUtil.java:162)
>     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.transfer(DFSOutputStream.java:866)
>     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:843)
>     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:934)
>     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:461)
> java.io.EOFException: Premature EOF: no length prefix available
>     at org.apache.hadoop.hdfs.protocol.HdfsProtoUtil.vintPrefixed(HdfsProtoUtil.java:162)
>     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.transfer(DFSOutputStream.java:866)
>     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:843)
>     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:934)
>     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:461)
> {noformat}
> *DataNode Trace*
> {noformat}
> 2012-05-17 15:39:12,261 ERROR datanode.DataNode (DataXceiver.java:run(193)) - host0.foo.com:49744:DataXceiver error processing TRANSFER_BLOCK operation  src: /127.0.0.1:49811 dest: /127.0.0.1:49744
> java.io.IOException: BP-2001850558-xx.xx.xx.xx-1337249347060:blk_-8165642083860293107_1002 is neither a RBW nor a Finalized, r=ReplicaBeingWritten, blk_-8165642083860293107_1003, RBW
>   getNumBytes()     = 1024
>   getBytesOnDisk()  = 1024
>   getVisibleLength()= 1024
>   getVolume()       = E:\MyWorkSpace\branch-2\Test\build\test\data\dfs\data\data1\current
>   getBlockFile()    = E:\MyWorkSpace\branch-2\Test\build\test\data\dfs\data\data1\current\BP-2001850558-xx.xx.xx.xx-1337249347060\current\rbw\blk_-8165642083860293107
>   bytesAcked=1024
>   bytesOnDisk=102
>     at org.apache.hadoop.hdfs.server.datanode.DataNode.transferReplicaForPipelineRecovery(DataNode.java:2038)
>     at org.apache.hadoop.hdfs.server.datanode.DataXceiver.transferBlock(DataXceiver.java:525)
>     at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opTransferBlock(Receiver.java:114)
>     at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:78)
>     at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:189)
>     at java.lang.Thread.run(Unknown Source)
> {noformat}
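For reference, the scenario in the description above can be approximated with the MiniDFSCluster test harness from the hadoop-hdfs test jar. This is a rough sketch under stated assumptions (branch-2-era Builder and stopDataNode APIs; the stopped DN index is a guess, since which 3 of the 4 DNs end up in the write pipeline is not deterministic, so a faithful test would look up the block's locations first):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.MiniDFSCluster;

// Rough reproduction sketch of the reported scenario: 4 DNs, a file
// written with replication 3, one pipeline DN stopped, then an append.
public class Hdfs3436ReproSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        MiniDFSCluster cluster =
            new MiniDFSCluster.Builder(conf).numDataNodes(4).build();
        try {
            cluster.waitActive();
            FileSystem fs = cluster.getFileSystem();
            Path file = new Path("/hdfs3436");

            // Steps 1-2: write the file through a 3-DN pipeline.
            FSDataOutputStream out = fs.create(file, (short) 3);
            out.write(new byte[1024]);
            out.close();

            // Step 3: stop a datanode. (Index 2 is a guess; pick a DN
            // actually holding the block in a real test.)
            cluster.stopDataNode(2);

            // The append triggers pipeline recovery; with this bug,
            // addDatanode2ExistingPipeline fails as in the client trace.
            FSDataOutputStream append = fs.append(file);
            append.write(new byte[1024]);
            append.hflush();
            append.close();
        } finally {
            cluster.shutdown();
        }
    }
}
{code}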