hadoop-hdfs-issues mailing list archives

From "Vladislav Falfushinsky (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-6953) HDFS file append failing in single node configuration
Date Wed, 27 Aug 2014 11:32:57 GMT

     [ https://issues.apache.org/jira/browse/HDFS-6953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vladislav Falfushinsky updated HDFS-6953:
-----------------------------------------

    Description: 
The following issue happens in both a fully distributed and a single node setup.
I have looked at the thread (https://issues.apache.org/jira/browse/HDFS-4600) about a similar
issue in a multinode cluster and made some changes to my configuration; however, it did not
change anything. The configuration files and application sources are attached.

Steps to reproduce:

$ ./test_hdfs

2014-08-27 14:23:08,472 WARN  [Thread-5] hdfs.DFSClient (DFSOutputStream.java:run(628)) -
DataStreamer Exception
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more
good datanodes being available to try. (Nodes: current=[127.0.0.1:50010], original=[127.0.0.1:50010]).
The current failed datanode replacement policy is DEFAULT, and a client may configure this
via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:969)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1035)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1184)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:532)
FSDataOutputStream#close error:
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more
good datanodes being available to try. (Nodes: current=[127.0.0.1:50010], original=[127.0.0.1:50010]).
The current failed datanode replacement policy is DEFAULT, and a client may configure this
via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:969)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1035)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1184)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:532)
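
The property named in the exception can be relaxed on the client side. As a sketch only (this mirrors the workaround discussed in HDFS-4600 for single-replica pipelines, and whether it is safe depends on the deployment), the relevant hdfs-site.xml entries would look like:

```xml
<!-- Client-side hdfs-site.xml: with only one datanode there is no
     replacement node to pick, so an append that hits a pipeline error
     cannot recover under the DEFAULT policy. Either disable the
     feature or set the policy to NEVER. -->
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
  <value>false</value>
</property>
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <value>NEVER</value>
</property>
```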

I have tried to run a simple example in Java that uses the append function. It failed too.
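
For reference, a minimal append client of the kind described above might look like the following sketch; the path and payload are hypothetical, and it assumes a running cluster whose core-site.xml/hdfs-site.xml are on the classpath:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AppendTest {
    public static void main(String[] args) throws Exception {
        // Picks up core-site.xml/hdfs-site.xml only if they are on the classpath.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        Path file = new Path("/tmp/append-test.txt");  // hypothetical path
        if (!fs.exists(file)) {
            fs.create(file).close();  // append requires an existing file
        }
        try (FSDataOutputStream out = fs.append(file)) {
            out.writeBytes("appended line\n");
        }
        fs.close();
    }
}
```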

I have tried to get the Hadoop environment settings from the Java application. It has shown the
default ones, not the settings that are mentioned in the core-site.xml and hdfs-site.xml files.
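
Seeing only defaults is consistent with the site files not being on the application classpath: `new Configuration()` loads `core-default.xml` and `core-site.xml` from the classpath and nothing else. One hedged way to check is to load the site files explicitly and print an effective value (the install path below is hypothetical):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class ConfCheck {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Explicitly add the site files; adjust the path to the actual install.
        conf.addResource(new Path("file:///usr/local/hadoop/etc/hadoop/core-site.xml"));
        conf.addResource(new Path("file:///usr/local/hadoop/etc/hadoop/hdfs-site.xml"));
        // If this still prints the default (file:///), the files were not found.
        System.out.println(conf.get("fs.defaultFS"));
    }
}
```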


  was:
The following issue happens in both fully distributed and single node setup. 
I have looked at the thread (https://issues.apache.org/jira/browse/HDFS-4600) about a similar
issue in a multinode cluster and made some changes to my configuration; however, it did not
change anything. The configuration files and application sources are attached.
Steps to reproduce:

$ ./test_hdfs
2014-08-27 14:23:08,472 WARN  [Thread-5] hdfs.DFSClient (DFSOutputStream.java:run(628)) -
DataStreamer Exception
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more
good datanodes being available to try. (Nodes: current=[127.0.0.1:50010], original=[127.0.0.1:50010]).
The current failed datanode replacement policy is DEFAULT, and a client may configure this
via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:969)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1035)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1184)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:532)
FSDataOutputStream#close error:
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more
good datanodes being available to try. (Nodes: current=[127.0.0.1:50010], original=[127.0.0.1:50010]).
The current failed datanode replacement policy is DEFAULT, and a client may configure this
via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:969)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1035)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1184)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:532)

Also I have tried a simple example in Java that uses the append function. It failed too.

Then I've tried to get the Hadoop environment settings from my application. It has shown the default
ones, not the settings that are mentioned in the *site.xml files.



> HDFS file append failing in single node configuration
> -----------------------------------------------------
>
>                 Key: HDFS-6953
>                 URL: https://issues.apache.org/jira/browse/HDFS-6953
>             Project: Hadoop HDFS
>          Issue Type: Bug
>         Environment: Ubuntu 12.01, Apache Hadoop 2.5.0 single node configuration
>            Reporter: Vladislav Falfushinsky
>         Attachments: Main.java, core-site.xml, hdfs-site.xml, test_hdfs.c
>
>



--
This message was sent by Atlassian JIRA
(v6.2#6252)
