hadoop-mapreduce-user mailing list archives

From Arinto Murdopo <ari...@gmail.com>
Subject Intermittent DataStreamer Exception while appending to file inside HDFS
Date Thu, 10 Oct 2013 07:32:20 GMT
Hi there,

I am getting the following exception intermittently while appending to an existing file in my
HDFS. When the error does not occur, the append succeeds; when it does, the
append fails.

Here is the error: https://gist.github.com/arinto/d37a56f449c61c9d1d9c
For convenience, here it is inline:

13/10/10 14:17:30 WARN hdfs.DFSClient: DataStreamer Exception
java.io.IOException: Failed to add a datanode.  User may turn off this
feature by setting
dfs.client.block.write.replace-datanode-on-failure.policy in
configuration, where the current policy is DEFAULT.  (Nodes:
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:778)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:838)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:934)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:461)
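
The message itself points at the dfs.client.block.write.replace-datanode-on-failure client settings. As a sketch only (the property names come from the error text; the values below are illustrative, not a recommendation), the policy could be relaxed on the client side in hdfs-site.xml:

```xml
<!-- Illustrative client-side settings; values are assumptions, not a recommendation. -->
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
  <value>true</value>
</property>
<property>
  <!-- NEVER skips adding a replacement datanode on pipeline failure, so the
       append does not fail this way, at the cost of finishing the block on a
       shorter (less replicated) pipeline. Other values: DEFAULT, ALWAYS. -->
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <value>NEVER</value>
</property>
```

Note that NEVER trades durability for availability, which is usually only sensible on very small clusters.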

Relevant configuration files:

1. hdfs-site.xml:

2. core-site.xml:

So, any idea how to solve this issue?

Some links that I've found (unfortunately they did not help):
1. StackOverflow<http://stackoverflow.com/questions/15347799/java-io-ioexception-failed-to-add-a-datanode-hdfs-hadoop>:
our replication factor is 3, and we have never changed it since we set up the cluster.
2. Impala-User mailing list: the error there was caused by a replication
factor of 1; in our case the replication factor is 3.

Best regards,

Arinto
