hive-user mailing list archives

From Krishna Rao <krishnanj...@gmail.com>
Subject [No Subject]
Date Mon, 31 Dec 2012 15:45:54 GMT
A particular query that I run fails with the following error:

***
Job 18: Map: 2  Reduce: 1   Cumulative CPU: 3.67 sec   HDFS Read: 0 HDFS
Write: 0 SUCCESS
Exception in thread "main"
org.apache.hadoop.mapreduce.counters.LimitExceededException: Too many
counters: 121 max=120
 ...
***

Googling suggests that I should increase "mapreduce.job.counters.limit", and
that the number of counters a job uses affects the memory used by the
JobTracker, so I shouldn't increase this number too much.
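For reference, the change I have in mind (assuming Hadoop 1.x / MRv1, where I
believe the limit is read by the JobTracker from mapred-site.xml rather than
set per job, and with 200 purely as an illustrative value) would look roughly
like this in mapred-site.xml, followed by a JobTracker restart:

  <property>
    <!-- raise the per-job counter limit above the default of 120 -->
    <name>mapreduce.job.counters.limit</name>
    <value>200</value>
  </property>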

Is there a rule of thumb for what this number should be as a function of
JobTracker memory? That is, should I be cautious and increase it by 5 at a
time, or could I just double it?

Cheers,

Krishna
