hadoop-hdfs-issues mailing list archives

From "Vladislav Falfushinsky (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-6953) HDFS file append failing in single node configuration
Date Sat, 30 Aug 2014 22:41:53 GMT

    [ https://issues.apache.org/jira/browse/HDFS-6953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14116579#comment-14116579 ]

Vladislav Falfushinsky commented on HDFS-6953:
----------------------------------------------

The issue can be closed. When running a C++ application, the CLASSPATH variable needs to be set in the Unix environment so that it contains HADOOP_CONF_DIR and all of the Hadoop jars.
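
For anyone hitting the same thing, a minimal sketch of the setup before running the client application (the etc/hadoop and share/hadoop paths are an assumption based on the default Hadoop 2.5.0 binary layout under $HADOOP_HOME; adjust them to your install):

$ export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
$ export CLASSPATH=$HADOOP_CONF_DIR
$ for jar in $(find $HADOOP_HOME/share/hadoop -name '*.jar'); do CLASSPATH=$CLASSPATH:$jar; done
$ ./test_hdfs

The jars are listed one by one because, as far as I know, libhdfs hands CLASSPATH to the embedded JVM as-is, and wildcard entries such as share/hadoop/common/lib/* may not be expanded there the way they are by the java launcher.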

> HDFS file append failing in single node configuration
> -----------------------------------------------------
>
>                 Key: HDFS-6953
>                 URL: https://issues.apache.org/jira/browse/HDFS-6953
>             Project: Hadoop HDFS
>          Issue Type: Bug
>         Environment: Ubuntu 12.01, Apache Hadoop 2.5.0 single node configuration
>            Reporter: Vladislav Falfushinsky
>         Attachments: Main.java, core-site.xml, hdfs-site.xml, test_hdfs.c
>
>
> The following issue happens in both fully distributed and single node setups.
> I have looked at the thread (https://issues.apache.org/jira/browse/HDFS-4600) about a similar issue in a multinode cluster and made some changes to my configuration, however it did not change anything. The configuration files and application sources are attached.
> Steps to reproduce:
> $ ./test_hdfs
> 2014-08-27 14:23:08,472 WARN  [Thread-5] hdfs.DFSClient (DFSOutputStream.java:run(628)) - DataStreamer Exception
> java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[127.0.0.1:50010], original=[127.0.0.1:50010]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
> 	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:969)
> 	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1035)
> 	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1184)
> 	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:532)
> FSDataOutputStream#close error:
> java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[127.0.0.1:50010], original=[127.0.0.1:50010]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
> 	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:969)
> 	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1035)
> 	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1184)
> 	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:532)
> I have tried to run a simple example in Java that uses the append function. It failed too.
> I have tried to get the Hadoop environment settings from the Java application. It showed the default ones, not the settings that are specified in the core-site.xml and hdfs-site.xml files.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
