hadoop-mapreduce-user mailing list archives

From Rahul Bhattacharjee <rahul.rec....@gmail.com>
Subject HDFS write failures!
Date Fri, 17 May 2013 17:10:45 GMT
Hi,

I was going through some documentation on the HDFS write path. It looks like
the write pipeline is closed when an error is encountered; the faulty node is
taken out of the pipeline and the write continues. Among the other
intermediate steps, the un-acked packets are moved from the ack queue back to
the data queue.
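The re-queueing step described above can be sketched roughly as follows. This is an illustrative model only, assuming simple string packets and deque-based queues; the class and method names are hypothetical and not the actual HDFS client internals:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch: when the pipeline fails, packets that were sent but
// not yet acknowledged are moved from the ack queue back to the FRONT of
// the data queue, preserving order, so they get re-sent through the
// rebuilt pipeline before any unsent packets.
public class PipelineRecoverySketch {
    public static void recover(Deque<String> dataQueue, Deque<String> ackQueue) {
        // Walk the ack queue from its tail so the oldest un-acked packet
        // ends up at the head of the data queue.
        while (!ackQueue.isEmpty()) {
            dataQueue.addFirst(ackQueue.pollLast());
        }
    }

    public static void main(String[] args) {
        Deque<String> dataQueue = new ArrayDeque<>();
        Deque<String> ackQueue = new ArrayDeque<>();
        dataQueue.add("pkt3");   // not yet sent
        ackQueue.add("pkt1");    // sent, awaiting ack
        ackQueue.add("pkt2");    // sent, awaiting ack
        recover(dataQueue, ackQueue);
        System.out.println(dataQueue); // [pkt1, pkt2, pkt3]
    }
}
```

After recovery the un-acked packets sit ahead of the unsent ones in their original order, which matches the behavior described in the HDFS write-path documentation.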

My question is: is this faulty datanode reported to the NN? Will the NN
continue to treat it as a valid DN when serving other write requests in the
future, or will it mark it as faulty?

Thanks,
Rahul
