hadoop-common-user mailing list archives

From "Jonathan Cao" <jonath...@rockyou.com>
Subject Re: HDFS read/write question
Date Tue, 13 Jan 2009 18:49:02 GMT
I encountered the same issue before: not only did the append operation fail,
but the appended file became corrupted afterwards. My tests indicated the
issue only shows up when the file is small (as in your case, i.e. less than
a block). Append seems to work fine with large files (~100M).
Jonathan

On Tue, Jan 13, 2009 at 9:59 AM, Manish Katyal <manish.katyal@gmail.com>wrote:

> I'm trying out the new append feature  (
> https://issues.apache.org/jira/browse/HADOOP-1700).
> [Hadoop 0.19, distributed mode with a single data node]
>
> The following scenario, which *I assume* should work as per the JIRA
> documentation (Appends.doc), does not:
>
>    ...//initialize FileSystem fs
>    //*(1) create a new file*
>    FSDataOutputStream os = fs.create(name, true, fs.getConf().getInt(
>        "io.file.buffer.size", 4096), fs.getDefaultReplication(), fs
>        .getDefaultBlockSize(), null);
>    os.writeUTF("hello");
>    os.flush();
>    os.close(); *//closed*
>
>    //*(2) open* the file for append
>    os = fs.append(name);
>    os.writeUTF("world");
>    os.flush(); //file is *not* closed
>
>    //*(3) read existing data from the file*
>    DataInputStream dis = fs.open(name);
>    String data = dis.readUTF();
>    dis.close();
>    System.out.println("Read: " + data); //*expected "hello"*
>
>    //finally close
>    os.close();
>
> I get an exception: hdfs.DFSClient: Could not obtain block
> blk_3192362259459791054_10766 from any node... (on *Step 3:*).
> What am I missing?
>
> Thanks.
> - Manish
>
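Independent of the HDFS block error, note that the snippet's writeUTF/readUTF framing can be exercised without a cluster. A minimal sketch using plain java.io streams (an illustration only, not the HDFS reproduction; the class name `WriteUtfFraming` is made up): each writeUTF call emits a 2-byte length prefix followed by the string bytes, so two writeUTF calls produce two separate records, and a single readUTF returns only the first.

```java
import java.io.*;

// Standalone sketch (no HDFS): DataOutputStream.writeUTF prefixes each
// string with a 2-byte length, so two writeUTF calls produce two records
// and one readUTF call returns only the first record.
public class WriteUtfFraming {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeUTF("hello");   // record 1: 0x00 0x05 followed by "hello"
        out.writeUTF("world");   // record 2, appended after the first
        out.close();

        DataInputStream in = new DataInputStream(
                new ByteArrayInputStream(buf.toByteArray()));
        System.out.println(in.readUTF()); // first record: "hello"
        System.out.println(in.readUTF()); // second record: "world"
        in.close();
    }
}
```

So even when the HDFS append itself succeeds, a reader that issues one readUTF will see only "hello"; a second readUTF is needed to see the appended "world".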
