hadoop-hdfs-user mailing list archives

From: Libo Yu <yu_l...@hotmail.com>
Subject: spilled records
Date: Fri, 09 May 2014 01:17:35 GMT
Hi, 

According to "Hadoop: The Definitive Guide", when mapreduce.job.shuffle.input.buffer.percent
is large enough, the map outputs are copied directly into the reduce JVM's memory.

I set this parameter to 0.5, which is large enough to hold the map outputs, but the number
of spilled records is still the same as the number of reduce input records. Does anybody
know why? Thanks.
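
For reference, a minimal sketch of setting this in the job driver (the class name and job
name are illustrative; the second property is the related reduce-side retention knob,
included only for comparison):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class ShuffleBufferSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Fraction of the reduce task's heap used to buffer map
        // outputs during the shuffle (copy) phase.
        conf.setFloat("mapreduce.job.shuffle.input.buffer.percent", 0.5f);

        // Related knob: fraction of the heap allowed to retain map
        // outputs in memory while the reducer runs; the default of 0.0
        // spills everything to disk before the reduce phase starts.
        conf.setFloat("mapreduce.reduce.input.buffer.percent", 0.5f);

        Job job = Job.getInstance(conf, "shuffle-buffer-sketch");
        // ... mapper/reducer/input/output setup as usual ...
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}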

Libo

