hadoop-common-user mailing list archives

From Konstantin Shvachko <...@yahoo-inc.com>
Subject Re: Question about the Namenode edit log and syncing the edit log to disk. 0.19.0
Date Wed, 07 Jan 2009 20:04:26 GMT
From the Java documentation for FileChannel.force(boolean):
"Passing false for this parameter indicates that only updates to the file's content need be
written to storage; passing true indicates that updates to both the file's content and metadata
must be written, which generally requires at least one more I/O operation."
See also a comment here
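To make the quoted distinction concrete, here is a minimal sketch (file name and contents are illustrative) of a data-only sync with force(false), which flushes the file's bytes but not its metadata (length, modification time):

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class ForceDemo {
    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("edits", ".log");
        f.deleteOnExit();
        try (RandomAccessFile raf = new RandomAccessFile(f, "rw")) {
            FileChannel ch = raf.getChannel();
            ch.write(ByteBuffer.wrap("OP_ADD\n".getBytes("UTF-8")));
            // false: flush only the file's content to storage, not its
            // metadata -- generally one fewer I/O than force(true).
            ch.force(false);
        }
        System.out.println(f.length());
    }
}
```

Passing true instead would additionally force the metadata out, at the cost of the extra I/O the Javadoc mentions.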

We are trying to avoid that extra (metadata) I/O during sync.
This is why the "rws" mode is not appropriate here.
We do not use the "rwd" mode because HDFS controls the syncs internally:
the modifications are batched and then synced together.
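The batching idea can be sketched roughly as follows. This is a hypothetical illustration, not the actual FSEditLog code: the class and method names are made up, and real HDFS coordinates multiple handler threads around the sync.

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

// Illustrative sketch: edit records accumulate in a memory buffer and
// are flushed with a single data-only force(false) per batch, instead
// of paying a per-record sync via the "rwd" open mode.
public class BatchedEditLog {
    private final StringBuilder buffer = new StringBuilder();

    synchronized void logEdit(String op) {
        buffer.append(op).append('\n');   // batch the record in memory
    }

    synchronized void sync(FileChannel ch) throws Exception {
        ch.write(ByteBuffer.wrap(buffer.toString().getBytes("UTF-8")));
        ch.force(false);                  // one data-only sync per batch
        buffer.setLength(0);
    }

    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("editlog", ".tmp");
        f.deleteOnExit();
        BatchedEditLog log = new BatchedEditLog();
        log.logEdit("OP_MKDIR /a");       // several edits accumulate...
        log.logEdit("OP_ADD /a/b");
        try (RandomAccessFile raf = new RandomAccessFile(f, "rw")) {
            log.sync(raf.getChannel());   // ...then one sync covers all
        }
        System.out.println(f.length());
    }
}
```

Opening with plain "rw" and issuing one force(false) per batch amortizes the sync cost across all the edits in that batch.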
More info here:

Thanks for the question.

Jason Venner wrote:
> I have always assumed (which is clearly my error) that edit log writes 
> were flushed to storage to ensure that the edit log was consistent 
> during machine crash recovery.
> I have been working through FSEditLog.java and I don't see any calls of 
> force(true) on the file channel or sync on the file descriptor, and the 
> edit log is not opened with an 's' or 'd' ie: the open flags are "rw" 
> and not "rws" or "rwd".
> The only thing I see in the code, is that the space in the file where 
> the updates will be written is preallocated.
> Have I missed the mechanism that the edit log data is flushed to the disk?
> Is the edit log data not forcibly flushed to the disk, relying instead on 
> the host operating system to perform the physical writes at a later date?
> Thanks -- Jason
