hadoop-hdfs-user mailing list archives

From Stack <st...@duboce.net>
Subject Re: exceptions i got in HDFS - append problem?
Date Fri, 09 Apr 2010 16:58:38 GMT
On Fri, Apr 9, 2010 at 3:07 AM, Gokulakannan M <gokulm@huawei.com> wrote:
> Hi,
>  I got the following exceptions , when I am using HDFS to write the logs
> coming from Scribe
>  1. java.io.IOException: Filesystem closed
>      <stack trace>
>      ........
>      ........
>      call to org.apache.hadoop.fs.FSDataOutputStream::write failed!

The above seems to be saying that the filesystem has been closed and, as a
consequence, you are not able to write to it.
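For what it's worth, a common way this happens is via the FileSystem cache: FileSystem.get() hands back a shared, cached instance for a given URI and user, so a close() anywhere (another thread, a shutdown hook, or the Scribe glue code) closes it for everyone. A minimal sketch of the failure mode (the path /tmp/example is just illustrative, and this assumes the default caching behavior):

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SharedFsSketch {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();

        // Both calls return the SAME cached object for the same
        // scheme/authority/user -- they are not independent handles.
        FileSystem fsA = FileSystem.get(conf);
        FileSystem fsB = FileSystem.get(conf);

        // Some other component closing "its" handle closes the shared one.
        fsB.close();

        // Any later use of fsA now fails with
        // "java.io.IOException: Filesystem closed".
        FSDataOutputStream out = fsA.create(new Path("/tmp/example"));
        out.write("hello".getBytes());
        out.close();
    }
}
```

If that is what is biting you, the usual fix is to make sure nothing closes the shared FileSystem while writers are still active.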

>  2. org.apache.hadoop.ipc.RemoteException:
> org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: failed to
> create
>       file xxx-2010-04-01-12-40_00000 for DFSClient_1355960219 on client
> because current leaseholder is trying to recreate file
>       <stack trace>
>      ........
>      ........
>      call to
> org.apache.hadoop.conf.FileSystem::append((Lorg/apache/hadoop/fs/Path;)Lorg/apache/hadoop/fs/FSDataOutputStream;)failed!

Someone holds the lease on the file you are trying to open?
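This can happen if a previous writer died without closing the file: the namenode keeps refusing append() until that writer's lease expires or is recovered. A hypothetical retry sketch (the method name, retry count, and sleep interval are my own, not anything Scribe or Hadoop ships):

```java
import java.io.IOException;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.ipc.RemoteException;

public class AppendRetrySketch {
    // Retry append() a few times, giving the namenode time to release or
    // recover the old holder's lease before giving up.
    static FSDataOutputStream appendWithRetry(FileSystem fs, Path p)
            throws IOException, InterruptedException {
        for (int attempt = 0; ; attempt++) {
            try {
                return fs.append(p);
            } catch (RemoteException re) {
                boolean leaseClash =
                    "org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException"
                        .equals(re.getClassName());
                if (!leaseClash || attempt >= 10) {
                    throw re;  // some other failure, or we've waited long enough
                }
                Thread.sleep(10000L);  // wait for lease expiry/recovery, then retry
            }
        }
    }
}
```

That only papers over the symptom, though; if the same client is genuinely trying to recreate a file it already holds open, the fix is in the writer logic, not in retries.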

You mention scribe.  Do you have hdfs-200 and friends applied to your cluster?

>   I didn't apply the HDFS-265 to my hadoop patch yet.

What hadoop version are you running?  hdfs-265 won't apply to hadoop
0.20.x if that is what you are running.

>   Are these exceptions due to bugs in the existing append feature, or some
> other reason?
>  Do I need to apply the complete append patch, or will a simpler patch
> solve this?

I haven't looked, but my guess is that the scribe documentation describes
the patchset required to run against hadoop.
