hadoop-hdfs-user mailing list archives

From Giovanni Marzulli <giovanni.marzu...@ba.infn.it>
Subject Write failure. Client and datanode on the same machine.
Date Fri, 10 Feb 2012 12:32:18 GMT
Hi,

I'm testing HDFS (0.20.203) on my cluster.

Specifically, I'm running write tests while a datanode is down but not yet marked dead by the namenode.
If I write a file from a client on the same machine where the datanode process was killed, the write fails and this log message is printed on every retry:

...INFO hdfs.DFSClient: Exception in createBlockOutputStream
java.net.ConnectException: Connection refused...
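
For reproducibility, here is a minimal sketch of the kind of write test I'm running (the path and buffer size are placeholders; FileSystem.get() picks up fs.default.name from the configuration on the classpath):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class LocalWriteTest {
    public static void main(String[] args) throws Exception {
        // Reads fs.default.name etc. from the Hadoop config on the classpath.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // /tmp/write-test is just a placeholder path for this sketch.
        FSDataOutputStream out = fs.create(new Path("/tmp/write-test"));
        out.write(new byte[64 * 1024]);
        // close() flushes the block through the write pipeline; with the
        // local datanode killed, the DFSClient logs the ConnectException
        // above on each retry and finally aborts the write.
        out.close();
        fs.close();
    }
}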

I noticed that the client insists on writing to the local datanode, exhausting
all retries (dfs.client.write.block.retries is set to 100) until the write
aborts!
Is this the correct behavior? After N failed retries, shouldn't the client
contact another datanode instead of aborting the write?
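
For reference, the retry count is raised in my hdfs-site.xml:

<!-- hdfs-site.xml -->
<property>
  <name>dfs.client.write.block.retries</name>
  <value>100</value>
</property>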

Thanks

Gianni

