From: Arinto Murdopo
Date: Thu, 10 Oct 2013 15:32:20 +0800
Subject: Intermittent DataStreamer Exception while appending to file inside HDFS
To: user@hadoop.apache.org

Hi there,

I get the following exception while appending to an existing file in my HDFS. The error appears intermittently: when it does not show up, the append succeeds; when it does, the append fails.

Here is the error: https://gist.github.com/arinto/d37a56f449c61c9d1d9c

For your convenience, here it is:

13/10/10 14:17:30 WARN hdfs.DFSClient: DataStreamer Exception
java.io.IOException: Failed to add a datanode. User may turn off this feature by setting dfs.client.block.write.replace-datanode-on-failure.policy in configuration, where the current policy is DEFAULT. (Nodes: current=[10.0.106.82:50010, 10.0.106.81:50010], original=[10.0.106.82:50010, 10.0.106.81:50010])
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:778)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:838)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:934)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:461)

Some configuration files:
1. hdfs-site.xml: https://gist.github.com/arinto/f5f1522a6f6994ddfc17#file-hdfs-append-datastream-exception-hdfs-site-xml
2. core-site.xml: https://gist.github.com/arinto/0c6f40872181fe26f8b1#file-hdfs-append-datastream-exception-core-site-xml

So, any idea how to solve this issue?

Some links that I've found (but unfortunately they do not help):
1. StackOverflow: our replication factor is 3, and we have never changed it since we set up the cluster.
2. Impala-User mailing list: the error there was due to the replication factor being set to 1. In our case, we are using replication factor = 3.

Best regards,
Arinto
www.otnira.com
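In case it helps with diagnosis, this is my understanding of when the DEFAULT replace-datanode-on-failure policy asks for a replacement datanode, written as a simplified sketch (the class and method names are my own illustration, not the actual DFSClient code): with replication 3 and only 2 nodes in the pipeline, an append always triggers the replacement attempt, which then fails because no third datanode can be added.

```java
// Simplified sketch of my understanding of the DEFAULT
// dfs.client.block.write.replace-datanode-on-failure.policy.
// A replacement datanode is requested when replication >= 3 and either
// half or fewer of the replicas remain in the pipeline, or the block
// is being appended/hflushed (our case).
public class ReplaceDatanodePolicySketch {
    static boolean shouldReplace(int replication, int livePipelineNodes,
                                 boolean appendOrHflush) {
        if (replication < 3) {
            return false; // DEFAULT never replaces for small replication
        }
        return livePipelineNodes <= replication / 2 || appendOrHflush;
    }

    public static void main(String[] args) {
        // Our situation: replication 3, 2 nodes in the pipeline, appending.
        System.out.println(shouldReplace(3, 2, true));   // true: append forces replacement
        System.out.println(shouldReplace(3, 2, false));  // false: a plain write with 2/3 nodes would not
    }
}
```

If this reading is right, it would explain why the problem only shows up on append, even though 2 of 3 replicas are still healthy.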
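For reference, these are the hdfs-site.xml properties I believe control this behaviour (the names come from the error message and the hdfs-default documentation; NEVER disables the replacement attempt so appends proceed with the remaining nodes). We have not applied this yet, since I would rather understand the failure than mask it:

```xml
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
  <value>true</value>
</property>
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <value>NEVER</value>
</property>
```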