hadoop-common-user mailing list archives

From Dhruba Borthakur <dhr...@gmail.com>
Subject Re: How does HDFS handle a failed Datanode during write?
Date Tue, 04 Jan 2011 05:56:55 GMT
Each packet carries the offset in the file at which it is supposed to be written.
So there is no harm in resending the same packet twice: the receiving
datanode will always write the packet to the correct offset in the
destination file.
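This idempotency can be sketched as follows. The class and method names here are illustrative, not actual HDFS code; the point is only that a positioned write makes a replayed packet harmless.

```java
import java.io.IOException;
import java.io.RandomAccessFile;

// Hypothetical sketch: because each packet carries its target offset,
// applying it is idempotent -- replaying the same packet lands the same
// bytes at the same position in the block file.
public class PacketWriter {
    static void writePacket(RandomAccessFile blockFile,
                            long offsetInBlock, byte[] data) throws IOException {
        blockFile.seek(offsetInBlock); // position at the packet's own offset
        blockFile.write(data);         // a duplicate send overwrites identically
    }
}
```

Since the write is positioned rather than appending, a datanode that receives the same packet twice ends up with exactly the same block contents.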

If B crashes during the write, the client does not know whether the write
succeeded at all the replicas. So the client bumps up the generation
stamp of the block and then *resends* all the pending packets to the
remaining datanodes in the pipeline.
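The client-side recovery step can be sketched roughly like this. All names and the queue representation are hypothetical, chosen only to mirror the 'data queue' / 'ack queue' terminology from the question below; this is not the DFSClient implementation.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Hypothetical sketch of client-side pipeline recovery: on a datanode
// failure, the unacknowledged packets move from the "ack queue" back to
// the front of the "data queue" (preserving send order), the dead node
// is dropped from the pipeline, and the block's generation stamp is
// bumped so stale replicas on the dead node can later be detected.
public class PipelineRecovery {
    final Deque<byte[]> dataQueue = new ArrayDeque<>(); // not yet sent
    final Deque<byte[]> ackQueue = new ArrayDeque<>();  // sent, not yet acked
    final List<String> pipeline;                        // e.g. [A, B, C]
    long generationStamp;

    PipelineRecovery(List<String> nodes, long gs) {
        this.pipeline = new ArrayList<>(nodes);
        this.generationStamp = gs;
    }

    void handleDatanodeFailure(String deadNode) {
        // Requeue every in-flight packet ahead of any unsent data.
        // Pulling from the ack queue's tail (newest first) and pushing
        // onto the data queue's head keeps the original send order.
        while (!ackQueue.isEmpty()) {
            dataQueue.addFirst(ackQueue.pollLast());
        }
        pipeline.remove(deadNode); // continue with the surviving replicas
        generationStamp++;         // any stale replica now has an old stamp
    }
}
```

Because every packet carries its own offset, replaying the requeued packets to the surviving datanodes is safe even if some of them had already received those packets before the failure.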


On Mon, Jan 3, 2011 at 12:49 AM, Sean Bigdatafun wrote:
> I'd like to understand how HDFS handles Datanode failure gracefully. Let's
> suppose a replication factor of 3 is used in HDFS for this discussion.
> After 'DataStreamer' receives a list of Datanodes A, B, C for a block, it
> starts pulling data packets off the 'data queue' and putting them onto the
> 'ack queue' after sending them over the wire to those Datanodes (using a
> pipeline mechanism Client -> A -> B -> C). If Datanode B crashes during the
> write, why does the client need to put the data packets in the 'ack queue'
> back into the 'data queue'? (How can the client guarantee the order of
> resent packets on Datanode A after all?)
> I guess I have not fully understood the write failure handling mechanism
> yet. Can someone give a detailed explanation?
> Thanks,
> --
> --Sean

Connect to me at http://www.facebook.com/dhruba
