From: "Doug Judd (JIRA)"
To: core-dev@hadoop.apache.org
Date: Wed, 28 Jan 2009 01:08:59 -0800 (PST)
Subject: [jira] Commented: (HADOOP-4379) In HDFS, sync() not yet guarantees data available to the new readers

    [ https://issues.apache.org/jira/browse/HADOOP-4379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12667972#action_12667972 ]

Doug Judd commented on HADOOP-4379:
-----------------------------------

Dhruba,

The application is not trying to reopen a file that it already has open. It appears that HDFS is getting confused and thinks that this is the case. One thing that is slightly different in this case is that nothing gets written to the file by the original writer. Here is the sequence of relevant operations:

1. File gets created [nothing gets appended to it]
2. Process gets killed
3. Process starts up again
4. File is reopened with append to obtain its length

Can you verify that this particular usage pattern is handled properly? I'll try to come up with a stripped-down test case tomorrow (a rough sketch of the sequence appears below, after the quoted issue details).

- Doug

> In HDFS, sync() not yet guarantees data available to the new readers
> --------------------------------------------------------------------
>
>                 Key: HADOOP-4379
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4379
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: dfs
>            Reporter: Tsz Wo (Nicholas), SZE
>            Assignee: dhruba borthakur
>             Fix For: 0.19.1
>
>         Attachments: 4379_20081010TC3.java, fsyncConcurrentReaders.txt, fsyncConcurrentReaders3.patch, fsyncConcurrentReaders4.patch, hypertable-namenode.log.gz, Reader.java, Reader.java, Writer.java, Writer.java
>
>
> In the append design doc (https://issues.apache.org/jira/secure/attachment/12370562/Appends.doc), it says:
> * A reader is guaranteed to be able to read data that was 'flushed' before the reader opened the file
> However, this feature is not yet implemented. Note that the operation 'flushed' is now called "sync".
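
For reference, a minimal sketch of the reproduction sequence described in the comment above, assuming the standard org.apache.hadoop.fs.FileSystem API with append support enabled (dfs.support.append=true). The class name, path, and command-line handling here are illustrative only; this is not the attached Reader.java/Writer.java or the actual Hypertable code. The idea is to run the "writer" mode in one JVM, kill that process, and then run the default mode in a second JVM.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReopenAfterCrashSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path file = new Path("/tmp/reopen-test/logfile");  // hypothetical path

        if (args.length > 0 && args[0].equals("writer")) {
            // Step 1: create the file but append nothing to it.
            FSDataOutputStream out = fs.create(file);
            // Step 2: simulate a crash -- this process is killed here without
            // ever closing the stream, so the NameNode may still hold a lease
            // on the file for this client.
            Thread.sleep(Long.MAX_VALUE);
        } else {
            // Steps 3-4: the restarted process reopens the file with append
            // to obtain its current length.
            FSDataOutputStream out = fs.append(file);  // reported to fail when
                                                       // HDFS still believes the
                                                       // file is open elsewhere
            long length = fs.getFileStatus(file).getLen();
            System.out.println("length after reopen = " + length);
            out.close();
        }
    }
}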
-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.