hadoop-mapreduce-user mailing list archives

From Alejandro Abdelnur <t...@cloudera.com>
Subject Re: Filesystem closed exception
Date Wed, 30 Jan 2013 18:49:51 GMT

Is FS caching enabled or not in your cluster?

A simple solution would be to modify your mapper code not to close the FS.
It will go away when the task ends anyway.
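To see why closing the FS in user code breaks the task, the cache semantics described below can be sketched in plain Java. `FakeFileSystem` and `FsCache` are hypothetical stand-ins for Hadoop's `FileSystem` and its internal cache, not the real API; the point is only that two callers asking for the same URI share one object, so one caller's close() poisons the other's handle:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for Hadoop's FileSystem: illustrates why closing
// a cached handle breaks every other holder of that same handle.
class FakeFileSystem {
    private boolean closed = false;

    void read() {
        if (closed) throw new IllegalStateException("Filesystem closed");
        // ... would read data here ...
    }

    void close() { closed = true; }
}

// Hypothetical stand-in for the FS cache: same key -> same shared instance,
// mimicking what FileSystem.get() does when caching is enabled.
class FsCache {
    private static final Map<String, FakeFileSystem> CACHE = new HashMap<>();

    static FakeFileSystem get(String uri) {
        return CACHE.computeIfAbsent(uri, k -> new FakeFileSystem());
    }
}

public class Demo {
    public static void main(String[] args) {
        FakeFileSystem mapperFs = FsCache.get("hdfs://nn:8020");
        FakeFileSystem taskFs = FsCache.get("hdfs://nn:8020");

        System.out.println(mapperFs == taskFs);   // true: one shared cached object

        mapperFs.close();                         // user code closes "its" handle
        try {
            taskFs.read();                        // framework code now fails
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());   // Filesystem closed
        }
    }
}
```

With caching on, the mapper never really had a private handle to close, which is why simply not calling close() in the mapper makes the exception go away.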


On Thu, Jan 24, 2013 at 5:26 PM, Hemanth Yamijala <yhemanth@thoughtworks.com> wrote:

> Hi,
> We are noticing a problem where we get a filesystem closed exception when
> a map task is done and is finishing execution. By map task, I literally
> mean the MapTask class of the MapReduce code. Debugging this, we found that
> the mapper gets a handle to the filesystem object and closes it itself.
> Because filesystem objects are cached, I believe the exception is expected
> behaviour.
> I just wanted to confirm that if we do have a requirement to use a
> filesystem object in a mapper or reducer, we should either:
> - not close it ourselves, or
> - (seems better to me) ask for a new, uncached filesystem instance by
> setting the fs.hdfs.impl.disable.cache property to true in the job
> configuration.
> Also, does anyone know if this behaviour was any different in Hadoop 0.20?
> For some context, this behaviour is actually seen in Oozie, which runs a
> launcher mapper for a simple Java action; hence, the Java action could very
> well interact with a file system. I know this is probably better addressed
> in an Oozie context, but I wanted to get the MapReduce view of things.
> Thanks,
> Hemanth
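For the second option Hemanth raises, the property would go into the job's configuration file. A minimal sketch — the property name is taken from his mail, and the surrounding XML is standard Hadoop configuration boilerplate:

```xml
<configuration>
  <!-- Disable the FS cache for hdfs:// URIs so each FileSystem.get() call
       returns a fresh, private instance; closing it in the mapper then
       does not affect the handle the MapTask framework itself holds. -->
  <property>
    <name>fs.hdfs.impl.disable.cache</name>
    <value>true</value>
  </property>
</configuration>
```

The trade-off is that every uncached instance carries its own connection state, so user code that requests one becomes responsible for closing it.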

