From: "Jonathan Cao" <jonathanc@rockyou.com>
To: core-user@hadoop.apache.org
Date: Tue, 13 Jan 2009 10:49:02 -0800
Subject: Re: HDFS read/write question
In-Reply-To: <94c23a010901130959k5373ee7aqf6bf80da3fb84027@mail.gmail.com>
I encountered the same issue before (not only did the append operation fail, but the appended file became corrupted afterward). My tests indicated this issue only shows up when the file is small (as in your case, i.e. less than one block). Append seems to work fine with large files (~100 MB).

Jonathan

On Tue, Jan 13, 2009 at 9:59 AM, Manish Katyal wrote:
> I'm trying out the new append feature (
> https://issues.apache.org/jira/browse/HADOOP-1700).
> [Hadoop 0.19, distributed mode with a single data node]
>
> The following scenario, which per the JIRA documentation (Appends.doc) *I
> assume* should work, does not:
>
> ... // initialize FileSystem fs
>
> // *(1) create a new file*
> FSDataOutputStream os = fs.create(name, true,
>     fs.getConf().getInt("io.file.buffer.size", 4096),
>     fs.getDefaultReplication(), fs.getDefaultBlockSize(), null);
> os.writeUTF("hello");
> os.flush();
> os.close(); // *closed*
>
> // *(2) open* the file for append
> os = fs.append(name);
> os.writeUTF("world");
> os.flush(); // file is *not* closed
>
> // *(3) read existing data from the file*
> DataInputStream dis = fs.open(name);
> String data = dis.readUTF();
> dis.close();
> System.out.println("Read: " + data); // *expected "hello"*
>
> // finally, close
> os.close();
>
> I get an exception at *step 3*: hdfs.DFSClient: Could not obtain block
> blk_3192362259459791054_10766 from any node...
> What am I missing?
>
> Thanks.
> - Manish
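[Editor's note: the sketch below is not from the thread. It reproduces Manish's three steps with plain java.io streams on a local filesystem, where the create / append-and-flush-without-closing / read sequence succeeds, to make explicit what step (3) expects to return. The class and method names (AppendReadLocal, createAppendRead) are invented for illustration; running this against HDFS 0.19 would require a live cluster and, per the thread, fails for sub-block files.]

```java
import java.io.*;

// Local-filesystem analogue of the three steps in Manish's message:
// (1) create + close, (2) reopen for append, flush but do NOT close,
// (3) read the first record while the appender is still open.
public class AppendReadLocal {

    static String createAppendRead(File f) throws IOException {
        // (1) create the file, write one UTF record, and close it
        DataOutputStream out = new DataOutputStream(new FileOutputStream(f));
        out.writeUTF("hello");
        out.close();

        // (2) reopen in append mode, write a second record, flush but do not close
        DataOutputStream appender =
                new DataOutputStream(new FileOutputStream(f, true));
        appender.writeUTF("world");
        appender.flush();

        // (3) read the first record while the appender is still open;
        // locally this succeeds, where HDFS 0.19 threw
        // "Could not obtain block ... from any node"
        DataInputStream in = new DataInputStream(new FileInputStream(f));
        String data = in.readUTF();
        in.close();

        appender.close();
        return data;
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("append-demo", ".dat");
        f.deleteOnExit();
        System.out.println("Read: " + createAppendRead(f)); // prints "Read: hello"
    }
}
```

On a local filesystem step (3) returns the pre-append record; the thread shows that on HDFS 0.19, with a file smaller than one block, the same sequence raised the "Could not obtain block" exception instead.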