hadoop-common-user mailing list archives

From Kai Voigt <k@123.org>
Subject Re: Shuffle phase replication factor
Date Tue, 21 May 2013 18:58:56 GMT
The map output doesn't get written to HDFS. Each map task writes its output to its local disk,
and the reduce tasks then pull that data over HTTP for further processing.
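As a rough illustration (assuming Hadoop 2.x property names; Hadoop 1.x uses mapred.local.dir,
and the class name below is just for illustration): the intermediate map output lands in the
node-local directories named by mapreduce.cluster.local.dir, while dfs.replication only governs
files that are actually written to HDFS, such as the final reducer output.

    import org.apache.hadoop.conf.Configuration;

    public class ShuffleConfigCheck {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // Node-local directories where map tasks spill their
            // intermediate (shuffle) output; these are not HDFS paths.
            System.out.println("local dirs: "
                    + conf.get("mapreduce.cluster.local.dir"));
            // HDFS replication factor; it applies only to files written
            // to HDFS (e.g. the final reducer output), not to map output.
            System.out.println("dfs.replication: "
                    + conf.get("dfs.replication", "3"));
        }
    }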

Am 21.05.2013 um 19:57 schrieb John Lilley <john.lilley@redpoint.net>:

> When MapReduce enters “shuffle” to partition the tuples, I am assuming that it writes
> intermediate data to HDFS.  What replication factor is used for those temporary files?
> john
>  

-- 
Kai Voigt
k@123.org




