hadoop-common-user mailing list archives

From Sean Bigdatafun <sean.bigdata...@gmail.com>
Subject How does HDFS handle a failed Datanode during write?
Date Mon, 03 Jan 2011 08:49:34 GMT
 I'd like to understand how HDFS handles a Datanode failure gracefully. Let's
assume a replication factor of 3 is used for this discussion.


After the 'DataStreamer' receives a list of Datanodes A, B, C for a block, it
starts pulling data packets off the 'data queue' and moving them onto the 'ack
queue' as it sends them over the wire to those Datanodes (using a pipeline
mechanism: Client -> A -> B -> C). If Datanode B crashes during the write,
why does the client need to move the data packets on the 'ack queue' back
onto the 'data queue'? (How can the client guarantee the order of the resent
packets on Datanode A, which has already received them?)
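
To make my mental model concrete, here is a rough sketch of the two
client-side queues as I picture them (this is not the real DFSOutputStream
code; every class, field, and method name below is made up for illustration):

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Toy model of the client-side write pipeline (NOT actual HDFS code).
public class PipelineSketch {
    static class Packet {
        final long seqno;                        // packets are ordered by seqno
        Packet(long seqno) { this.seqno = seqno; }
    }

    // Packets waiting to be sent.
    final Deque<Packet> dataQueue = new ArrayDeque<>();
    // Packets sent but not yet acked by every Datanode in the pipeline.
    final Deque<Packet> ackQueue = new ArrayDeque<>();
    // Current pipeline for this block, e.g. Client -> A -> B -> C.
    final List<String> pipeline = new ArrayList<>(List.of("A", "B", "C"));

    // Normal path: send the next packet and park it on the ack queue.
    void sendOnePacket() {
        Packet p = dataQueue.pollFirst();
        if (p == null) return;
        // ... stream p to the first Datanode in the pipeline here ...
        ackQueue.addLast(p);                     // held until the whole pipeline acks
    }

    // Failure path: a Datanode died before acking everything it was sent.
    void handleDatanodeFailure(String failed) {
        // 1. Move every unacked packet back to the FRONT of the data queue,
        //    preserving seqno order, so no in-flight data is lost.
        while (!ackQueue.isEmpty()) {
            dataQueue.addFirst(ackQueue.pollLast());
        }
        // 2. Rebuild the pipeline without the failed node.
        pipeline.remove(failed);
        // 3. When streaming resumes, the surviving Datanodes see the resent
        //    packets again. My guess is that the seqno lets a node like A
        //    recognize and skip data it already has, making the resend
        //    harmless -- but that is exactly the part I am asking about.
    }

    public static void main(String[] args) {
        PipelineSketch s = new PipelineSketch();
        for (long i = 1; i <= 3; i++) s.dataQueue.addLast(new Packet(i));
        s.sendOnePacket();
        s.sendOnePacket();             // packets 1 and 2 now sit on the ack queue
        s.handleDatanodeFailure("B");  // both go back to the front of the data queue
        System.out.println("pipeline after failure: " + s.pipeline);  // [A, C]
    }
}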
I guess I haven't fully understood the write-failure handling mechanism yet.
Can someone give a detailed explanation?

Thanks,
-- 
--Sean