hadoop-common-user mailing list archives

From Ravi Prakash <ravi...@ymail.com>
Subject Re: HDFS write failures!
Date Fri, 17 May 2013 18:14:13 GMT
Hi,

I couldn't find any code that would relay this failure to the NN. The relevant code is in
DFSOutputStream:DataStreamer:processDatanodeError()

For trunk: https://svn.apache.org/repos/asf/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
For 0.20: http://javasourcecode.org/html/open-source/hadoop/hadoop-0.20.203.0/org/apache/hadoop/hdfs/DFSClient.DFSOutputStream.java.html
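
Roughly, the client-side recovery looks something like the sketch below. This is
a simplified illustration, not the actual DFSOutputStream code; the names
(Packet, pipeline, errorIndex) are just stand-ins:

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

class PipelineRecoverySketch {
    static class Packet { /* a chunk of block data plus its sequence number */ }

    Deque<Packet> dataQueue = new ArrayDeque<>(); // packets waiting to be sent
    Deque<Packet> ackQueue  = new ArrayDeque<>(); // packets sent but not yet acked
    List<String>  pipeline  = new ArrayList<>();  // datanodes in the write pipeline
    int errorIndex = -1;                          // index of the datanode that failed

    void processDatanodeError() {
        // 1. Tear down the streams to the first datanode in the pipeline (omitted).

        // 2. Move every un-acked packet back to the front of the data queue,
        //    preserving order, so it gets re-sent through the rebuilt pipeline.
        while (!ackQueue.isEmpty()) {
            dataQueue.addFirst(ackQueue.pollLast());
        }

        // 3. Drop the failed datanode and keep going with the survivors.
        //    Note that nothing here reports the failure to the NameNode.
        if (errorIndex >= 0) {
            pipeline.remove(errorIndex);
            errorIndex = -1;
        }

        // 4. Re-open the pipeline with the remaining datanodes and resume
        //    streaming from the data queue (omitted).
    }
}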


I believe the assumption here is that the NN should independently discover the failed node.
Also, some failures might not be worth reporting because the DN is expected to recover
from them.
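
For reference, with stock settings the NN only declares a DN dead after missing its
heartbeats for a while. A quick back-of-the-envelope (the property names and defaults
below are the commonly documented ones and may differ slightly between versions):

public class DeadNodeTimeout {
    public static void main(String[] args) {
        long heartbeatIntervalMs = 3_000;   // dfs.heartbeat.interval, default 3 s
        long recheckIntervalMs   = 300_000; // heartbeat recheck interval, default 5 min

        // The NN marks a DN dead after roughly: 2 * recheck + 10 * heartbeat
        long deadIntervalMs = 2 * recheckIntervalMs + 10 * heartbeatIntervalMs;

        System.out.println("DN declared dead after ~"
            + deadIntervalMs / 60000.0 + " minutes"); // ~10.5 minutes
    }
}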

Ravi.




________________________________
 From: Rahul Bhattacharjee <rahul.rec.dgp@gmail.com>
To: "user@hadoop.apache.org" <user@hadoop.apache.org> 
Sent: Friday, May 17, 2013 12:10 PM
Subject: HDFS write failures!
 


Hi,


I was going through some documents about the HDFS write pattern. It looks like the write pipeline
is closed when an error is encountered, the faulty node is taken out of the pipeline, and
the write continues. A few other intermediate steps are to move the un-acked packets from the ack
queue back to the data queue.


My question is: is this faulty data node reported to the NN, and would the NN continue
to use it as a valid DN while serving other write requests in the future, or will it mark it as
faulty?


Thanks,
Rahul