Date: Sat, 30 Aug 2014 22:43:53 +0000 (UTC)
From: "Vladislav Falfushinsky (JIRA)"
To: hdfs-dev@hadoop.apache.org
Reply-To: hdfs-dev@hadoop.apache.org
Subject: [jira] [Resolved] (HDFS-6953) HDFS file append failing in single node configuration

     [ https://issues.apache.org/jira/browse/HDFS-6953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vladislav Falfushinsky resolved HDFS-6953.
------------------------------------------
    Resolution: Fixed

  Release Note: The issue can be closed. When running a C++ application, the CLASSPATH variable must be set in the Unix environment so that it contains HADOOP_CONF_DIR and all of the jars shipped with Hadoop.
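The release note above can be sketched as a small shell snippet. This is a minimal sketch, not part of the resolution itself: the `HADOOP_HOME`/`HADOOP_CONF_DIR` paths are assumptions to adjust for your install, and `test_hdfs` is the reporter's attached binary.

```shell
# Assumed install locations; override via the environment if yours differ.
HADOOP_HOME="${HADOOP_HOME:-/usr/local/hadoop}"
HADOOP_CONF_DIR="${HADOOP_CONF_DIR:-$HADOOP_HOME/etc/hadoop}"

# Put the conf dir first so core-site.xml / hdfs-site.xml are found,
# then append every jar under the Hadoop install tree.
CLASSPATH="$HADOOP_CONF_DIR"
for jar in $(find "$HADOOP_HOME" -name '*.jar' 2>/dev/null); do
  CLASSPATH="$CLASSPATH:$jar"
done
export CLASSPATH
echo "$CLASSPATH"

# With CLASSPATH exported, the libhdfs-based client can be run, e.g.:
# ./test_hdfs
```

Newer Hadoop releases can generate the jar list with `hadoop classpath --glob`, but the explicit `find` loop above avoids depending on that option being available.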
> HDFS file append failing in single node configuration
> -----------------------------------------------------
>
>                 Key: HDFS-6953
>                 URL: https://issues.apache.org/jira/browse/HDFS-6953
>             Project: Hadoop HDFS
>          Issue Type: Bug
>         Environment: Ubuntu 12.01, Apache Hadoop 2.5.0 single node configuration
>            Reporter: Vladislav Falfushinsky
>         Attachments: Main.java, core-site.xml, hdfs-site.xml, test_hdfs.c
>
> The following issue happens in both fully distributed and single node setups.
> I have looked at the thread (https://issues.apache.org/jira/browse/HDFS-4600) about a similar issue in a multinode cluster and made some changes to my configuration; however, it did not change anything. The configuration files and application sources are attached.
> Steps to reproduce:
> $ ./test_hdfs
> 2014-08-27 14:23:08,472 WARN [Thread-5] hdfs.DFSClient (DFSOutputStream.java:run(628)) - DataStreamer Exception
> java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[127.0.0.1:50010], original=[127.0.0.1:50010]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
> 	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:969)
> 	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1035)
> 	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1184)
> 	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:532)
> FSDataOutputStream#close error:
> java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[127.0.0.1:50010], original=[127.0.0.1:50010]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
> 	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:969)
> 	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1035)
> 	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1184)
> 	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:532)
> I have tried to run a simple example in Java that uses the append function. It failed too.
> I have tried to get the Hadoop environment settings from the Java application. It showed the default values, not the settings mentioned in the core-site.xml and hdfs-site.xml files.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
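As the quoted exception itself suggests, the pipeline-recovery error can also be suppressed on a single-node cluster through the client-side `dfs.client.block.write.replace-datanode-on-failure` settings in hdfs-site.xml. This is a sketch of the commonly cited single-node workaround (see HDFS-4600), not the resolution of this ticket, and it should not be used on real multi-node clusters:

```xml
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
  <value>true</value>
</property>
<property>
  <!-- With a single datanode and replication factor 1 there is no spare
       node to substitute into the pipeline, so tell the client not to
       look for one. Only appropriate for single-node/test setups. -->
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <value>NEVER</value>
</property>
```

The DEFAULT policy tries to add a replacement datanode during append/recovery, which necessarily fails when only one datanode exists; NEVER keeps writing on the surviving pipeline instead.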