hadoop-hdfs-issues mailing list archives

From "Zhanwei.Wang (Commented) (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-3179) failed to append data, DataStreamer throw an exception, "nodes.length != original.length + 1" on single datanode cluster
Date Tue, 03 Apr 2012 21:14:26 GMT

    [ https://issues.apache.org/jira/browse/HDFS-3179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13245755#comment-13245755 ]

Zhanwei.Wang commented on HDFS-3179:
------------------------------------

I totally agree with you about "the problem of one datanode with replication 3"; I think this
kind of operation should fail, or at least produce a warning.

My opinion is that the purpose of "the policy check" is to make sure there is no potential data
loss. In this "one datanode, 3 replicas" case, although the first append failure will not cause
data loss, the data appended after the first successful append is at risk because there is only
one replica rather than the 3 the user expects, and there is no warning to tell the user about it.

My suggestion is to make the first write to the empty file fail if there are not enough datanodes;
in other words, make the policy check stricter. And make the error message friendlier than
"nodes.length != original.length + 1".
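
For a one-datanode test setup, a possible client-side workaround (my sketch, not part of the original
report) is to disable the DFSClient's replace-datanode-on-failure feature so that append does not
insist on growing the pipeline. The NameNode URI, file path, payload and class name below are
hypothetical placeholders:

{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SingleNodeAppendSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Tell the DFSClient not to require a replacement datanode when the
        // append/recovery pipeline is short; sensible only for small test clusters.
        conf.setBoolean("dfs.client.block.write.replace-datanode-on-failure.enable", false);

        // NameNode URI and file path are made-up placeholders.
        FileSystem fs = FileSystem.get(URI.create("hdfs://localhost:8020"), conf);
        FSDataOutputStream out = fs.append(new Path("/test")); // file must already exist
        out.write("test\n".getBytes("UTF-8"));
        out.close();
        fs.close();
    }
}
{code}

The same property can also be put in hdfs-site.xml; for webhdfs appends it would have to be picked
up by the datanode that runs the DFSClient on behalf of the HTTP request.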



                
> failed to append data, DataStreamer throw an exception, "nodes.length != original.length + 1" on single datanode cluster
> ------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-3179
>                 URL: https://issues.apache.org/jira/browse/HDFS-3179
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: data-node
>    Affects Versions: 0.23.2
>            Reporter: Zhanwei.Wang
>            Priority: Critical
>
> Create a single datanode cluster
> disable permissions
> enable webhdfs
> start hdfs
> run the test script
> expected result:
> a file named "test" is created and the content is "testtest"
> the result I got:
> hdfs throws an exception on the second append operation.
> {code}
> ./test.sh 
> {"RemoteException":{"exception":"IOException","javaClassName":"java.io.IOException","message":"Failed
to add a datanode: nodes.length != original.length + 1, nodes=[127.0.0.1:50010], original=[127.0.0.1:50010]"}}
> {code}
> Log in datanode:
> {code}
> 2012-04-02 14:34:21,058 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception
> java.io.IOException: Failed to add a datanode: nodes.length != original.length + 1, nodes=[127.0.0.1:50010], original=[127.0.0.1:50010]
> 	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:778)
> 	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:834)
> 	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:930)
> 	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:461)
> 2012-04-02 14:34:21,059 ERROR org.apache.hadoop.hdfs.DFSClient: Failed to close file /test
> java.io.IOException: Failed to add a datanode: nodes.length != original.length + 1, nodes=[127.0.0.1:50010], original=[127.0.0.1:50010]
> 	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:778)
> 	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:834)
> 	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:930)
> 	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:461)
> {code}
> test.sh
> {code}
> #!/bin/sh
> echo "test" > test.txt
> curl -L -X PUT "http://localhost:50070/webhdfs/v1/test?op=CREATE"
> curl -L -X POST -T test.txt "http://localhost:50070/webhdfs/v1/test?op=APPEND"
> curl -L -X POST -T test.txt "http://localhost:50070/webhdfs/v1/test?op=APPEND"
> {code}
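
For illustration only, here is a simplified stand-in (mine, not the actual DFSOutputStream code) for
the length check that produces the message in the log above: during append/pipeline recovery the
client asks for one more datanode and then verifies the pipeline grew by exactly one, which cannot
happen on a single-datanode cluster.

{code}
import java.io.IOException;
import java.util.Arrays;

public class PipelineGrowthCheckDemo {
    // Simplified stand-in for the real check: after asking the namenode for an
    // additional datanode, the client verifies the pipeline grew by exactly one.
    static void checkAddedDatanode(String[] nodes, String[] original) throws IOException {
        if (nodes.length != original.length + 1) {
            throw new IOException("Failed to add a datanode: nodes.length != original.length + 1"
                + ", nodes=" + Arrays.asList(nodes)
                + ", original=" + Arrays.asList(original));
        }
    }

    public static void main(String[] args) throws IOException {
        // On a single-datanode cluster there is no extra node to add, so the
        // "new" pipeline is identical to the old one and the check throws.
        String[] original = { "127.0.0.1:50010" };
        String[] nodes    = { "127.0.0.1:50010" };
        checkAddedDatanode(nodes, original);
    }
}
{code}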


       
