hadoop-user mailing list archives

From: Harsh J <ha...@cloudera.com>
Subject: Re: How is sharing done in HDFS ?
Date: Wed, 22 May 2013 08:45:32 GMT
The job-specific files placed by the client are downloaded individually
by every TaskTracker from HDFS (this process, called "localization", happens
before the task starts up) and the local copies are then used.
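As an illustration, here is a minimal MR1-era Java sketch of the related
client-side API (the classic DistributedCache; the HDFS path and class name
below are just placeholders): the client registers an extra file with the
job, and each TaskTracker pulls its own copy from HDFS into mapred.local.dir
before the task launches, the same localization step that also handles
job.jar, job.xml and the job token.

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.mapred.JobConf;

public class LocalizationSketch {
  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(new Configuration(), LocalizationSketch.class);

    // A file the client has already uploaded to HDFS (placeholder path).
    DistributedCache.addCacheFile(new URI("/user/nikhil/lookup.dat"), conf);

    // ... set mapper/reducer and input/output paths, then submit the job.
    // Before starting each task, the TaskTracker copies lookup.dat (along
    // with job.jar, job.xml, the job token, etc.) from HDFS into its own
    // mapred.local.dir; that copy step is the "localization" above.
  }
}

// Inside a Mapper or Reducer, the localized copy is read from local disk:
//   Path[] local = DistributedCache.getLocalCacheFiles(job);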


On Wed, May 22, 2013 at 1:59 PM, Agarwal, Nikhil
<Nikhil.Agarwal@netapp.com> wrote:

>  Hi,
>
> Can anyone guide me to some pointers or explain how HDFS shares the
> information put in the temporary directories (hadoop.tmp.dir,
> mapred.tmp.dir, etc.) with all the other nodes?
>
> I suppose that during execution of a MapReduce job, the JobTracker
> prepares a file called jobtoken and puts it in the temporary directories,
> which then needs to be read by all TaskTrackers. So how does HDFS share
> the contents? Does it use an NFS mount, or …?
>
> Thanks & Regards,
>
> Nikhil



-- 
Harsh J
