hadoop-hdfs-user mailing list archives

From "Tsz Wo \(Nicholas\), Sze" <s29752-hadoopu...@yahoo.com>
Subject Re: hflush not working for me?
Date Fri, 09 Oct 2009 20:27:52 GMT
Hi St.Ack,

> ... soft lease to 1 second ...
You are right that you don't have to change the soft lease.  It is for append but not related
to hflush.

> Do I have to do the open as another user?
This should not be necessary.

Could you send me/post your test?

Nicholas Sze



>From: stack <stack@duboce.net>
>To: hdfs-user@hadoop.apache.org
>Sent: Fri, October 9, 2009 1:13:37 PM
>Subject: hflush not working for me?
>
>I'm putting together some unit tests in our application that exercise hflush.
>I'm using MiniDFSCluster and a jar made by building the head of the 0.21 branch of hdfs (from
>about a minute ago).
>
>Code opens a file, writes a bunch of edits, invokes hflush (by calling sync on the DFSDataOutputStream
>instance) and then, without closing the Writer, opens a Reader on the same file.  This Reader
>does not see any edits at all, never mind edits up to the sync invocation.
>
>I can trace the code and see how on hflush it sends the queued packets of edits.
>
>I studied TestReadWhileWriting.  I've set setBoolean("dfs.support.append", true) before
>the MiniDFSCluster spins up.  I can't set the soft lease to 1 second because I'm not in the
>same package, so I just wait out the default minute.  It doesn't seem to make a difference.
>
>Do I have to do the open as another user?
>
>Thanks for any pointers,
>St.Ack
>
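
(For reference, a minimal sketch of the write/hflush/read pattern described above, assuming
the 0.21-era APIs where sync() on the output stream is the hflush call.  This is not the
actual test from this thread; the class name, file path, and edit strings are placeholders.)

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.HdfsConfiguration;
    import org.apache.hadoop.hdfs.MiniDFSCluster;

    public class HflushSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new HdfsConfiguration();
        // Enable append/hflush support before the cluster starts, as in the test described above.
        conf.setBoolean("dfs.support.append", true);
        MiniDFSCluster cluster = new MiniDFSCluster(conf, 1, true, null);
        try {
          FileSystem fs = cluster.getFileSystem();
          Path p = new Path("/test/hflush.dat");   // placeholder path

          // Writer: write a few "edits" and flush them without closing the stream.
          FSDataOutputStream out = fs.create(p);
          out.writeBytes("edit-1\nedit-2\nedit-3\n");
          out.sync();   // sync() acts as hflush on the 0.21 output stream

          // Reader: open the same file while the writer is still open.
          // Whether the flushed bytes are visible here is exactly the question in this thread.
          FSDataInputStream in = fs.open(p);
          byte[] buf = new byte[128];
          int n = in.read(buf);
          System.out.println("read " + n + " bytes after hflush");
          in.close();

          out.close();
        } finally {
          cluster.shutdown();
        }
      }
    }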