hadoop-mapreduce-user mailing list archives

From Stephen Boesch <java...@gmail.com>
Subject Job exceeded Reduce Input limit
Date Wed, 04 Jul 2012 15:46:58 GMT
I am running a terasort job on a small cluster with powerful nodes.
The number of reduce slots is 12.  I am seeing the following message:

Job JOBID="job_201207031814_0011" FINISH_TIME="1341389866650"
JOB_STATUS="FAILED" FINISHED_MAPS="42" FINISHED_REDUCES="0"
FAIL_REASON="Job exceeded Reduce Input limit  Limit:  10737418240
Estimated: 102000004905" .
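
(For scale: the 10737418240-byte limit is 10 GiB, while the 102000004905-byte
estimate works out to roughly 95 GiB, so the estimate is almost ten times the
limit.)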


This check apparently was added recently:

http://mail-archives.apache.org/mod_mbox/hadoop-common-commits/201103.mbox/%3C20110304042718.5854E23888CD@eris.apache.org%3E
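
As far as I can tell from that commit, the JobTracker compares its estimate
of the total reduce input size against a configured limit and fails the job
when the estimate is larger. A rough paraphrase in Java of what the check
seems to do (the class and method names here are mine, not the actual
JobTracker code):

public class ReduceInputLimitCheck {

  // Illustrative only: the shape of the check, not the real implementation.
  // A limit of -1 is documented as "no limit set".
  static void check(long limitBytes, long estimatedBytes) {
    if (limitBytes != -1L && estimatedBytes > limitBytes) {
      throw new RuntimeException("Job exceeded Reduce Input limit  Limit: "
          + limitBytes + " Estimated: " + estimatedBytes);
    }
  }

  public static void main(String[] args) {
    // The numbers from my failed job: a 10 GiB limit vs. a ~95 GiB estimate.
    check(10737418240L, 102000004905L);
  }
}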


It looks like the solution would be to set mapreduce.reduce.input.limit to
-1:


<property>
  <name>mapreduce.reduce.input.limit</name>
  <value>-1</value>
  <description>The limit on the input size of the reduce. If the estimated
  input size of the reduce is greater than this value, job is failed. A
  value of -1 means that there is no limit set.</description>
</property>
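
Presumably the same property can also be passed per job with the generic -D
option when launching terasort, along these lines (the jar name and paths are
just illustrative; I have not verified that the check honors a per-job
override):

hadoop jar hadoop-examples.jar terasort \
  -Dmapreduce.reduce.input.limit=-1 \
  /terasort/input /terasort/output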


I did that (in mapred-site.xml), but it did not affect the behavior; the
problem continues.
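
To sanity-check that the value is being read from the config files at all, I
suppose something like the following would show what the client side resolves
(assuming mapred-site.xml is on the classpath; this may not be what the
JobTracker itself uses):

import org.apache.hadoop.conf.Configuration;

public class PrintReduceInputLimit {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // mapred-site.xml is not loaded by a bare Configuration, so add it explicitly.
    conf.addResource("mapred-site.xml");
    System.out.println("mapreduce.reduce.input.limit = "
        + conf.get("mapreduce.reduce.input.limit", "<unset>"));
  }
}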


Any hints appreciated.


thx!
