hadoop-mapreduce-user mailing list archives

From Rainer Toebbicke <...@pclella.cern.ch>
Subject cannot create files in hdfs when -put command issued on a datanode which is in exclude list
Date Tue, 27 Jan 2015 16:49:09 GMT
Hello,


I ran into a weird problem creating files, and for the moment I only have a shaky conclusion:

Logged in as a vanilla user on a datanode, the simple command "hdfs dfs -put /etc/motd motd"
reproducibly bails out with:

WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/tobbicke/motd._COPYING_
could only be replicated to 0 nodes instead of minReplication (=1).  There are 17 datanode(s)
running and no node(s) are excluded in this operation.
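For what it's worth, rerunning the put with client-side debug logging turned on (a standard
Hadoop logging knob, nothing cluster-specific) should show the block pipeline setup in detail:

    # turn on DEBUG output on the console for this one command
    HADOOP_ROOT_LOGGER=DEBUG,console hdfs dfs -put /etc/motd motd

The DataStreamer debug messages should show which datanodes, if any, the namenode hands back
as block targets for the write.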


Restarting datanodes did not help, and the namenode logs were rather inconclusive. Chasing
the only hint in there, authentication problems from other users (we're using Kerberos),
I happened to log in to another datanode, and to my surprise (!) everything worked smoothly there.

Trying all of them, with a mix of successes and failures, the only conclusion I came up with
is that putting a datanode into "decommissioning" somehow affects client write access (no
problem for -get), even for ordinary users.
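For anyone wanting to cross-check on their own cluster, something along these lines should
show the per-node state (standard HDFS CLI; the exclude-file location is whatever
dfs.hosts.exclude points to on your install):

    # per-datanode state as the namenode sees it (needs HDFS admin rights)
    hdfs dfsadmin -report | grep -E 'Hostname|Decommission Status'

    # where the exclude file lives on this cluster
    hdfs getconf -confKey dfs.hosts.exclude

    # the put/get asymmetry, run on a node listed in that file:
    hdfs dfs -put /etc/motd motd     # fails with the DataStreamer exception above
    hdfs dfs -get motd /tmp/motd     # works fine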

Is this possible? Intended, even? And if yes, what is the logic behind it? After all, I
don't care which datanodes the file ends up on; there are plenty.

We're on Cloudera CDH 5.2.0 (Hadoop 2.5.0), in case that matters.

Any ideas?


