hadoop-mapreduce-user mailing list archives

From Stephen Boesch <java...@gmail.com>
Subject Job exceeded Reduce Input limit
Date Wed, 04 Jul 2012 15:46:58 GMT
I am running a (terasort) job on a small cluster with powerful nodes.
The cluster has 12 reducer slots. I am seeing the following message:

Job JOBID="job_201207031814_0011" FINISH_TIME="1341389866650"
FAIL_REASON="Job exceeded Reduce Input limit  Limit:  10737418240
Estimated: 102000004905" .
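For scale, the two numbers in that message already explain the failure: the limit is 10 GiB, while the estimated reduce input is roughly 95 GiB, so the job is failed up front. A quick check:

```python
# Reproduce the comparison from the job history line above.
limit = 10737418240        # mapreduce.reduce.input.limit: 10 GiB in bytes
estimated = 102000004905   # estimated total reduce input from the message

print(estimated / 2**30)   # roughly 95 GiB of reduce input
print(estimated > limit)   # True, so the job fails before any reduce runs
```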

Now this apparently was added recently:


It looks like the solution would be to set mapreduce.reduce.input.limit to -1:

+  <name>mapreduce.reduce.input.limit</name>
+  <value>-1</value>
+  <description>The limit on the input size of the reduce. If the estimated
+  input size of the reduce is greater than this value, job is failed. A
+  value of -1 means that there is no limit set. </description>

I did that (in mapred-site.xml), but it did not affect the behavior,
i.e. the problem continues.
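For reference, this is the full property block I added, with the name and value taken from the patch quoted above:

```xml
<!-- mapred-site.xml: disable the reduce input size check.
     Property name and value are from the patch quoted above. -->
<property>
  <name>mapreduce.reduce.input.limit</name>
  <value>-1</value>
</property>
```

One thing I have not verified: if this check happens during job initialization on the JobTracker, a client-side setting might simply be ignored, and the property may need to go into the JobTracker's own mapred-site.xml (with a restart) instead.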

Any hints appreciated.
