hadoop-common-dev mailing list archives

From Vinod KV <vino...@yahoo-inc.com>
Subject Re: job conf object
Date Fri, 11 Jun 2010 03:55:55 GMT
On Wednesday 26 May 2010 04:38 PM, Saurabh Agarwal wrote:
> Hi,
> I am toying around with the Hadoop configuration.
> I am trying to replace HDFS with a common NFS mount. I only have map tasks,
> so intermediate outputs need not be communicated.
> Is there a way to make the temp directory local to the nodes and place the
> job conf object and jar on an NFS mount so all the nodes can access it?
> Saurabh Agarwal

In principle you can do this, because MapReduce uses the FileSystem APIs 
everywhere, but you may run into some quirks.
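A rough sketch of the configuration this implies, assuming 0.20-era 
property names (fs.default.name, mapred.local.dir, mapred.system.dir); 
the paths below are hypothetical and would need to match your own mounts:

```xml
<!-- core-site.xml: use the local file system as the default FileSystem,
     so paths on the shared NFS mount are visible to every node -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>file:///</value>
  </property>
</configuration>

<!-- mapred-site.xml -->
<configuration>
  <!-- keep temp/intermediate data on a node-local disk (hypothetical path) -->
  <property>
    <name>mapred.local.dir</name>
    <value>/local/disk/mapred/local</value>
  </property>
  <!-- shared directory for the job conf and jar, reachable by all
       nodes through the NFS mount (hypothetical path) -->
  <property>
    <name>mapred.system.dir</name>
    <value>/nfs/shared/mapred/system</value>
  </property>
</configuration>
```

With mapred.system.dir on the NFS mount, the framework stages the job 
configuration and jar in a location every TaskTracker can read, while 
mapred.local.dir keeps each node's scratch space off the shared mount.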

OTOH, it is a very bad idea and highly discouraged to run MapReduce on 
NFS: as the number of nodes, and thus tasks, scales up, NFS becomes a 
bottleneck and tasks/jobs start failing in hard-to-debug ways.

