hadoop-common-dev mailing list archives

From Giovanni Marzulli <giovanni.marzu...@ba.infn.it>
Subject Writing failure test. Client and datanode on the same machine.
Date Mon, 13 Feb 2012 08:46:41 GMT
Hi,

I'm testing HDFS (0.20.203) on my cluster.

In particular, I'm running write tests while a datanode is down but has
not yet been marked dead by the namenode.
If I write a file from a client on the same machine where the datanode
process was killed, the write fails and these logs are printed:

...INFO hdfs.DFSClient: Exception in createBlockOutputStream
java.net.ConnectException: Connection refused... (for every retry)
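
For reference, here's roughly how I'm driving the test from the client
side. This is just a sketch of my setup: the namenode URI and output path
are placeholders, and the local datanode has already been killed before
the write starts.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class WriteTest {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // placeholder namenode URI for my cluster
            conf.set("fs.default.name", "hdfs://namenode:9000");
            // same retry setting as in my hdfs-site.xml
            conf.setInt("dfs.client.block.write.retries", 100);

            FileSystem fs = FileSystem.get(conf);
            // arbitrary test path; the datanode on this machine is down
            FSDataOutputStream out = fs.create(new Path("/tmp/writetest.dat"));
            byte[] buf = new byte[64 * 1024];
            for (int i = 0; i < 1024; i++) {
                // fails with the ConnectException above when the client
                // tries to open a block pipeline to the local datanode
                out.write(buf);
            }
            out.close();
            fs.close();
        }
    }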

I noticed that the client insists on writing to the local datanode,
exhausting all retries (dfs.client.block.write.retries is set to 100),
until the write aborts!
Is this the correct behavior? After N failed retries, shouldn't it
contact another datanode instead of aborting the write?

Thanks

Gianni
