hadoop-hdfs-issues mailing list archives

From "dhruba borthakur (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-200) In HDFS, sync() not yet guarantees data available to the new readers
Date Fri, 11 Sep 2009 08:53:57 GMT

    [ https://issues.apache.org/jira/browse/HDFS-200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12754052#action_12754052 ]

dhruba borthakur commented on HDFS-200:
---------------------------------------

I think this is the scenario that you are facing:

1. The file was written for the first time and had not yet been closed. The writer then closed
the file, but at that point only one of the three replicas had checked in with the namenode.
2. The new writer invoked append() to write more data into the file. The new writer found
the one remaining replica of the block, assigned a new generation stamp to this block, made
it ready to receive new data for this file, and lease recovery succeeded. Stamping the new
generation stamp essentially invalidated the other two replicas of this block... this block
now has only one valid replica. The namenode won't start replicating this block until the
block is full. If this sole datanode now goes down, then the file will be "missing a block".
This is what you folks encountered.

One option is to set dfs.replication.min to 2. This will ensure that closing a file (step
1) succeeds only when at least two replicas of the block have checked in with the namenode.
This should reduce the probability of this problem occurring. Another option is to set the
replication factor of the hbase log file(s) to be greater than 3.
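As a sketch, the first option would look like this in hdfs-site.xml (property name as used
in 0.20-era configs; it was later renamed dfs.namenode.replication.min):

```xml
<!-- hdfs-site.xml: require 2 replicas of the last block to check in
     with the namenode before a close() is allowed to succeed -->
<property>
  <name>dfs.replication.min</name>
  <value>2</value>
</property>
```

For the second option, the replication factor of an existing file can be raised with the
standard shell command, e.g. `hadoop fs -setrep 4 <path-to-hbase-log>` (the path here is a
placeholder for your actual log file location).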



> In HDFS, sync() not yet guarantees data available to the new readers
> --------------------------------------------------------------------
>
>                 Key: HDFS-200
>                 URL: https://issues.apache.org/jira/browse/HDFS-200
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>            Reporter: Tsz Wo (Nicholas), SZE
>            Assignee: dhruba borthakur
>            Priority: Blocker
>         Attachments: 4379_20081010TC3.java, fsyncConcurrentReaders.txt, fsyncConcurrentReaders11_20.txt,
> fsyncConcurrentReaders12_20.txt, fsyncConcurrentReaders13_20.txt, fsyncConcurrentReaders14_20.txt,
> fsyncConcurrentReaders3.patch, fsyncConcurrentReaders4.patch, fsyncConcurrentReaders5.txt,
> fsyncConcurrentReaders6.patch, fsyncConcurrentReaders9.patch, hadoop-stack-namenode-aa0-000-12.u.powerset.com.log.gz,
> hdfs-200-ryan-existing-file-fail.txt, hypertable-namenode.log.gz, namenode.log, namenode.log,
> Reader.java, Reader.java, reopen_test.sh, ReopenProblem.java, Writer.java, Writer.java
>
>
> In the append design doc (https://issues.apache.org/jira/secure/attachment/12370562/Appends.doc),
> it says
> * A reader is guaranteed to be able to read data that was 'flushed' before the reader
> opened the file
> However, this feature is not yet implemented.  Note that the operation 'flushed' is now
> called "sync".

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

