hadoop-common-user mailing list archives

From Ted Yu <yuzhih...@gmail.com>
Subject Re: Problem with calling FSDataOutputStream.sync() ~
Date Sat, 26 Jun 2010 15:59:59 GMT
The line numbers don't match those from Hadoop 0.20.2.
What version are you using?

This is from the Syncable interface:
  /**
   * Synchronize all buffer with the underlying devices.
   * @throws IOException
   */

If you look at src/core/org/apache/hadoop/fs/RawLocalFileSystem.java where
LocalFSFileOutputStream implements Syncable:
    public void sync() throws IOException {
      fos.getFD().sync();
    }
you will see that sync() is a file-level operation.
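
For context, FSDataOutputStream.sync() itself only delegates: if the stream it
wraps implements Syncable, the call is forwarded; otherwise it is a no-op. A
paraphrase of the 0.20-era code (a sketch, not a verbatim copy of the source):

    // Inside FSDataOutputStream (paraphrased): sync() forwards to the
    // wrapped stream only when that stream is itself Syncable, e.g.
    // DFSOutputStream on HDFS or LocalFSFileOutputStream on the local FS.
    public void sync() throws IOException {
      if (wrappedStream instanceof Syncable) {
        ((Syncable) wrappedStream).sync();
      }
    }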

On Fri, Jun 25, 2010 at 9:27 PM, elton sky <eltonsky9404@gmail.com> wrote:

> Hello,
>
> I am trying a simple code snippet that creates a new file. After creating
> and writing to the file, I want to use sync() to synchronize all replicas.
> However, I get a LeaseExpiredException in FSNamesystem.checkLease().
> My code:
> .
> .
> InputStream in = null;
> OutputStream out = null;
> try {
>     in = new BufferedInputStream(new FileInputStream(src));
>
>     FileSystem fs = FileSystem.get(URI.create(dest), conf);
>     System.out.println(fs.getClass().getName());
>
>     out = fs.create(new Path(dest), true);
>     assert(fs.exists(new Path(dest)) == true);
>
>     IOUtils.copyBytes(in, out, conf, true);
>
>     ((FSDataOutputStream) out).flush();
>     ((FSDataOutputStream) out).sync();  // Got the exception here
>
>     System.out.println(dest + " is created and synced successfully.");
>     printFileInfo(new Path(dest));
> } catch (IOException e) {
>     IOUtils.closeStream(out);
>     IOUtils.closeStream(in);
>     throw e;
> } finally {
>     IOUtils.closeStream(out);
>     IOUtils.closeStream(in);
> }
> .
> .
>
> Exception in thread "main" org.apache.hadoop.ipc.RemoteException:
> org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on
> /user/elton/test/file2 File is not open for writing. Holder
> DFSClient_-925213311 does not have any open files.
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1367)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1334)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.fsync(FSNamesystem.java:1857)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.fsync(NameNode.java:679)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:616)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:416)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>
>         at org.apache.hadoop.ipc.Client.call(Client.java:740)
>         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>         at $Proxy0.fsync(Unknown Source)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:616)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>         at $Proxy0.fsync(Unknown Source)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.sync(DFSClient.java:3141)
>         at org.apache.hadoop.fs.FSDataOutputStream.sync(FSDataOutputStream.java:97)
> .
> .
>
> I figure the reason is that checkLease() got an INodeFile object rather than
> an INodeFileUnderConstruction:
> .
> .
> // make sure that we still have the lease on this file.
> private INodeFileUnderConstruction checkLease(String src, String holder)
>     throws IOException {
>   INodeFile file = dir.getFileINode(src);
>   checkLease(src, holder, file);
>   return (INodeFileUnderConstruction) file;
> }
> .
> .
> But how can this happen? Any idea?
>
> Elton
>
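
One detail worth double-checking in the snippet above: IOUtils.copyBytes(in,
out, conf, true) closes both streams when its last argument is true. Once the
output stream is closed, the NameNode completes the file (it becomes a plain
INodeFile and the lease is released), which would explain why the later sync()
fails checkLease() with exactly the "not open for writing" message in the
trace. A minimal sketch of the write-then-sync ordering, assuming the same
0.20-era API as the snippet (the SyncBeforeClose class name is just for
illustration):

    import java.io.BufferedInputStream;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;

    public class SyncBeforeClose {
      // Copy a local file to dest and sync() while the lease is still held.
      public static void copyAndSync(String src, String dest, Configuration conf)
          throws IOException {
        InputStream in = null;
        FSDataOutputStream out = null;
        try {
          in = new BufferedInputStream(new FileInputStream(src));
          FileSystem fs = FileSystem.get(URI.create(dest), conf);
          out = fs.create(new Path(dest), true);

          // Pass close=false so copyBytes() does NOT close the streams for us.
          IOUtils.copyBytes(in, out, conf, false);

          // The file is still open here, so the lease check on the NameNode
          // passes; newer releases spell this hflush()/hsync() instead.
          out.sync();
        } finally {
          IOUtils.closeStream(out);  // closing completes the file on the NameNode
          IOUtils.closeStream(in);
        }
      }
    }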
