hadoop-hdfs-user mailing list archives

From Allen Wittenauer <...@apache.org>
Subject Re: Will blocks of an unclosed file get lost when HDFS client (or the HDFS cluster) crashes?
Date Mon, 14 Mar 2011 16:21:55 GMT
	No.

	If a close hasn't been committed for the file, the associated blocks/files disappear in both
the client-crash and namenode-crash scenarios.
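A minimal sketch of the write-then-close lifecycle in question, assuming the standard org.apache.hadoop.fs client API; the path and payload here are hypothetical, not taken from the thread.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class WriteExample {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());

            // Hypothetical path and payload, for illustration only.
            FSDataOutputStream out = fs.create(new Path("/tmp/fileA"));
            out.write(new byte[64 * 1024 * 1024]);  // roughly one 64 MB block

            // If the client crashes here, before close(), the blocks written so
            // far have not been committed on the namenode and may be discarded.

            out.close();  // only a committed close makes the file's length and
                          // block list durable in the namespace
        }
    }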


On Mar 13, 2011, at 10:09 PM, Sean Bigdatafun wrote:

> I meant an HDFS chunk (64 MB in size), and I meant version 0.20.2
> without the append patch.
> 
> I think even without the append patch, the previously written 64 MB blocks
> (in my example, the first 5 blocks) should be safe. Aren't they?
> 
> 
> On 3/13/11, Ted Dunning <tdunning@maprtech.com> wrote:
>> What do you mean by block?  An HDFS chunk?  Or a flushed write?
>> 
>> The answer depends a bit on which version of HDFS / Hadoop you are using.
>> With the append branches, things happen a lot more like what you expect.
>> Without the append work, it is difficult to say what will happen.
>> 
>> Also, there are very few guarantees about what happens if the namenode
>> crashes.  There are some provisions for recovery, but none of them really
>> have any sort of transactional guarantees.  This means that there may be
>> some unspecified time before the writes that you have done are actually
>> persisted in a recoverable way.
>> 
>> On Sun, Mar 13, 2011 at 9:52 AM, Sean Bigdatafun
>> <sean.bigdatafun@gmail.com>wrote:
>> 
>>> Let's say an HDFS client starts writing a file A (which is 10 blocks
>>> long) and 5 blocks have been written to datanodes.
>>> 
>>> At this time, if the HDFS client crashes (apparently without a close
>>> op), will we see 5 valid blocks for file A?
>>> 
>>> Similarly, at this time if the HDFS cluster crashes, will we see 5
>>> valid blocks for file A?
>>> 
>>> (I guess both answers are yes, but I'd like some confirmation :-)
>>> --
>>> --Sean
>>> 
>> 
> 
> 
> -- 
> --Sean
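
For the append-capable branches Ted mentions above, a hedged sketch of making buffered writes visible to readers before close(). It assumes a client whose FSDataOutputStream provides hflush() (0.21 and later; 0.20-era clients exposed sync() instead); the path and record are hypothetical.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HflushExample {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            FSDataOutputStream out = fs.create(new Path("/tmp/fileA"));

            out.write("some record\n".getBytes("UTF-8"));
            out.hflush();  // flushed data is pushed to the datanode pipeline and
                           // becomes visible to new readers, so it can survive a
                           // client crash even though the file is not yet closed

            // ... more writes ...
            out.close();   // close commits the final length on the namenode
        }
    }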

