Subject: Re: Reading while appending in 0.21
From: Jean-Daniel Cryans <jdcryans@gmail.com>
To: hdfs-user@hadoop.apache.org
Date: Mon, 11 Jan 2010 17:53:24 -0800

Hi Hairong,

I now understand that this is outside of the scope of 265 (which works
great). What I'm thinking is that tailing seems achievable now that we
have hflush, tho I'm saying this without much knowledge of the
internals.

J-D

On Mon, Jan 11, 2010 at 5:42 PM, Hairong Kuang wrote:
> This semantics is stated in our append specification attached to HDFS-265.
> Hflushed data are visible to new readers who open the file after the hflush.
>
> Hairong
>
> On Mon, Jan 11, 2010 at 5:33 PM, Jean-Daniel Cryans wrote:
>>
>> Thanks for the answer Konstantin, good to know.
>> I also see that the
>> commenter doesn't seem pleased with the fact that the file has to be
>> reopened ;)
>>
>> So basically what I wrongly expected was a tailing feature. Searches
>> in Jira don't give me a hit; would it make sense to open one?
>>
>> In the meantime, I'm OK with recreating the reader.
>>
>> Thx,
>>
>> J-D
>>
>> On Mon, Jan 11, 2010 at 5:15 PM, Konstantin Boudnik wrote:
>> > Jean,
>> >
>> > I believe this is how it has been intended to work. If you take a look
>> > at the test src/test/hdfs/org/apache/hadoop/hdfs/TestHFlush.java in the
>> > HDFS workspace, you'll see a comment on this particular aspect at line 124.
>> >
>> > Hope it helps,
>> > Cos
>> >
>> > On 1/11/10 16:42, Jean-Daniel Cryans wrote:
>> >>
>> >> Hi,
>> >>
>> >> I'm trying to use the new hflush function from 0.21 so that a
>> >> SequenceFile.Reader could read edits from a SequenceFile.Writer after
>> >> a signal on a Condition. If I:
>> >>
>> >> create the Writer
>> >> append entries
>> >> hflush
>> >> create the Reader
>> >> next() through the entries
>> >>
>> >> it works fine. But after that, if I only next(), using the same reader,
>> >> after appending/hflush, I won't see the new edits. But if I create a
>> >> new Reader after calling hflush, it works fine.
>> >>
>> >> So this does not work:
>> >>
>> >> create the Writer
>> >> append entries
>> >> hflush
>> >> create the Reader
>> >> next() through the entries
>> >> append entries
>> >> hflush
>> >> next() through the entries
>> >> append entries
>> >> hflush
>> >> next() through the entries
>> >>
>> >> This does:
>> >>
>> >> create the Writer
>> >> append entries
>> >> hflush
>> >> create the Reader
>> >> next() through the entries
>> >> append entries
>> >> hflush
>> >> create a new Reader
>> >> seek
>> >> next() through the entries
>> >>
>> >> Is that the intended behavior?
>> >>
>> >> Thx!
>> >>
>> >> J-D
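[Editor's sketch] For reference, here is a minimal sketch of the reopen-and-seek workaround described in the last sequence above ("create a new Reader / seek / next()"), against the classic SequenceFile API. The path, key/value types, and batch sizes are made up for illustration, and the flush is shown as SequenceFile.Writer.syncFs(), which is assumed to be the 0.21-era way to reach hflush from a SequenceFile.Writer; adjust to whatever flush method your build exposes. This is not taken from the thread, just an illustration of the pattern it describes.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class SequenceFileTail {

  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path path = new Path("/tmp/tail-demo.seq");  // illustrative path

    // Writer side: append a batch of entries, then flush them out.
    SequenceFile.Writer writer = SequenceFile.createWriter(
        fs, conf, path, LongWritable.class, Text.class);
    for (long i = 0; i < 10; i++) {
      writer.append(new LongWritable(i), new Text("entry-" + i));
    }
    writer.syncFs();  // assumed flush-to-readers call; later versions expose hflush()/hsync()

    // Reader side: read what is visible so far and remember where we stopped.
    long lastPos = readFrom(fs, conf, path, 0L);

    // More appends plus a flush on the writer side...
    for (long i = 10; i < 20; i++) {
      writer.append(new LongWritable(i), new Text("entry-" + i));
    }
    writer.syncFs();

    // ...and, per the thread, a *new* Reader is needed to see them:
    // reopen the file, seek back to the saved position, and next() from there.
    lastPos = readFrom(fs, conf, path, lastPos);

    writer.close();
  }

  /** Opens a fresh Reader, seeks to startPos, reads to EOF, returns the new position. */
  private static long readFrom(FileSystem fs, Configuration conf, Path path,
      long startPos) throws IOException {
    SequenceFile.Reader reader = new SequenceFile.Reader(fs, path, conf);
    try {
      if (startPos > 0) {
        // Positions returned by getPosition() after a next() are valid seek targets.
        reader.seek(startPos);
      }
      LongWritable key = new LongWritable();
      Text value = new Text();
      while (reader.next(key, value)) {
        System.out.println(key + " -> " + value);
      }
      return reader.getPosition();
    } finally {
      reader.close();
    }
  }
}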