Date: Wed, 11 Apr 2012 17:55:20 +0000 (UTC)
From: "Tsz Wo (Nicholas), SZE (Updated) (JIRA)"
To: hdfs-issues@hadoop.apache.org
Message-ID: <1478310204.13449.1334166920270.JavaMail.tomcat@hel.zones.apache.org>
In-Reply-To: <368911571.2898.1333400125828.JavaMail.tomcat@hel.zones.apache.org>
Subject: [jira] [Updated] (HDFS-3179) Improve the error message: DataStreamer throw an exception, "nodes.length != original.length + 1" on single datanode cluster

     [ https://issues.apache.org/jira/browse/HDFS-3179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tsz Wo (Nicholas), SZE updated HDFS-3179:
-----------------------------------------

       Resolution: Fixed
    Fix Version/s: 2.0.0
     Hadoop Flags: Reviewed
           Status: Resolved  (was: Patch Available)

Uma, thanks for the review. I have committed this.

> Improve the error message: DataStreamer throw an exception, "nodes.length != original.length + 1" on single datanode cluster
> ----------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-3179
>                 URL: https://issues.apache.org/jira/browse/HDFS-3179
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: hdfs client
>    Affects Versions: 0.23.2
>            Reporter: Zhanwei.Wang
>            Assignee: Tsz Wo (Nicholas), SZE
>             Fix For: 2.0.0
>
>      Attachments: h3179_20120403.patch
>
>
> Create a single datanode cluster:
> disable permissions
> enable webhdfs
> start hdfs
> run the test script
>
> expected result:
> a file named "test" is created and the content is "testtest"
>
> the result I got:
> hdfs throws an exception on the second append operation.
> {code}
> ./test.sh
> {"RemoteException":{"exception":"IOException","javaClassName":"java.io.IOException","message":"Failed to add a datanode: nodes.length != original.length + 1, nodes=[127.0.0.1:50010], original=[127.0.0.1:50010]"}}
> {code}
>
> Log in datanode:
> {code}
> 2012-04-02 14:34:21,058 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception
> java.io.IOException: Failed to add a datanode: nodes.length != original.length + 1, nodes=[127.0.0.1:50010], original=[127.0.0.1:50010]
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:778)
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:834)
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:930)
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:461)
> 2012-04-02 14:34:21,059 ERROR org.apache.hadoop.hdfs.DFSClient: Failed to close file /test
> java.io.IOException: Failed to add a datanode: nodes.length != original.length + 1, nodes=[127.0.0.1:50010], original=[127.0.0.1:50010]
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:778)
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:834)
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:930)
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:461)
> {code}
>
> test.sh
> {code}
> #!/bin/sh
> echo "test" > test.txt
> curl -L -X PUT "http://localhost:50070/webhdfs/v1/test?op=CREATE"
> curl -L -X POST -T test.txt "http://localhost:50070/webhdfs/v1/test?op=APPEND"
> curl -L -X POST -T test.txt "http://localhost:50070/webhdfs/v1/test?op=APPEND"
> {code}
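
The committed change for HDFS-3179 only improves the error message; it does not make the append succeed. As a minimal sketch of a workaround on a single-datanode cluster, the client can relax the replace-datanode-on-failure behaviour, since there is no second datanode to add to the pipeline. The dfs.client.block.write.replace-datanode-on-failure.* keys are the standard HDFS client settings; the namenode URI (hdfs://localhost:8020), the /test path, and the class name below are illustrative assumptions, not part of the reported setup.

{code}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SingleDatanodeAppend {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // With only one datanode there is no replacement node to add to the write
    // pipeline, so turn the replace-datanode-on-failure feature off for this client...
    conf.setBoolean("dfs.client.block.write.replace-datanode-on-failure.enable", false);
    // ...or, alternatively, keep it enabled but never require a replacement:
    // conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");

    // Append to the file created in the reproduction above (illustrative URI and path).
    FileSystem fs = FileSystem.get(URI.create("hdfs://localhost:8020"), conf);
    try (FSDataOutputStream out = fs.append(new Path("/test"))) {
      out.writeBytes("test\n");
    }
  }
}
{code}

For the WebHDFS reproduction in test.sh, the append is performed by a DFSClient running inside the datanode, so these properties would likely need to be set in the datanode's hdfs-site.xml rather than in application code.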