hadoop-common-user mailing list archives

From "Manish Katyal" <manish.kat...@gmail.com>
Subject HDFS read/write question
Date Tue, 13 Jan 2009 17:59:12 GMT
I'm trying out the new append feature (https://issues.apache.org/jira/browse/HADOOP-1700).
[Hadoop 0.19, distributed mode with a single data node]

The following scenario, which per the JIRA documentation (Appends.doc) I
*assume* should work, does not:

    ...//initialize FileSystem fs

    // (1) create a new file
    FSDataOutputStream os = fs.create(name, true,
        fs.getConf().getInt("io.file.buffer.size", 4096),
        fs.getDefaultReplication(), fs.getDefaultBlockSize(), null);
    os.writeUTF("hello");
    os.flush();
    os.close(); // closed

    // (2) open the file for append
    os = fs.append(name);
    os.writeUTF("world");
    os.flush(); // file is *not* closed

    // (3) read existing data from the file
    DataInputStream dis = fs.open(name);
    String data = dis.readUTF();
    dis.close();
    System.out.println("Read: " + data); // expected "hello"

    // finally, close the append stream
    os.close();

At step (3) I get an exception: hdfs.DFSClient: Could not obtain block
blk_3192362259459791054_10766 from any node...
What am I missing?
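
For comparison, here is a minimal variant of the same three steps that I
would expect to work, on the assumption that a reader can only see a file's
last block once the writer has closed the stream and the block is finalized.
(The class name and path below are just for illustration, and this assumes
append is enabled on the cluster.)

    import java.io.DataInputStream;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class AppendThenRead {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            Path name = new Path("/tmp/append-test"); // illustrative path

            // (1) create a new file and close it
            FSDataOutputStream os = fs.create(name, true);
            os.writeUTF("hello");
            os.close();

            // (2) append, but close *before* any reader opens the file,
            // so the last block is finalized
            os = fs.append(name);
            os.writeUTF("world");
            os.close();

            // (3) both records should now be visible to a reader
            DataInputStream dis = fs.open(name);
            System.out.println("Read: " + dis.readUTF()); // "hello"
            System.out.println("Read: " + dis.readUTF()); // "world"
            dis.close();
        }
    }

If that assumption holds, my step (3) fails because the file's last block is
still under construction while the append stream is open, so no data node
will serve it; but my reading of Appends.doc was that a reader should at
least see the data written before the append.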

Thanks.
- Manish
