giraph-user mailing list archives

From Arjun Sharma <as469...@gmail.com>
Subject Re: Setting Max Counters
Date Thu, 22 Oct 2015 16:51:33 GMT
Thanks Thomas and Ravikant for your replies. I set both parameters, but it
did not work for me. I am using YARN and Cloudera, so it could be something
related to that. The way I got it to work was to disable superstep counters
from the Giraph command line.
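
For reference, a minimal sketch of what such an invocation can look like with
GiraphRunner; the jar, computation class, input/output paths, and worker count
below are placeholders, and the option name giraph.useSuperstepCounters is
quoted from memory, so it is worth verifying against GiraphConstants in your
Giraph version:

    hadoop jar giraph-examples.jar org.apache.giraph.GiraphRunner \
        org.apache.giraph.examples.SimpleShortestPathsComputation \
        -vif org.apache.giraph.io.formats.JsonLongDoubleFloatDoubleVertexInputFormat \
        -vip /user/arjun/input/tiny_graph.txt \
        -vof org.apache.giraph.io.formats.IdWithValueTextOutputFormat \
        -op /user/arjun/output/shortestpaths \
        -w 4 \
        -ca giraph.useSuperstepCounters=false

The -ca flag passes an arbitrary configuration key/value to the job, which is
how per-job Giraph options are normally set without touching the cluster
configuration.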

On Tue, Oct 20, 2015 at 7:14 AM, Ravikant Dindokar <ravikant.iisc@gmail.com>
wrote:

> Hi Arjun,
>
> I also faced the same issue, and setting "mapreduce.job.counters.max" in
> mapred-site.xml to a value above 120 worked for me.
>
> Thanks
> Ravikant
>
> <http://mapredit.blogspot.gr/2012/12/hive-query-error-too-many-counters.html>
>
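
For concreteness, the corresponding mapred-site.xml entry might look like the
following (500 is just an example value; anything comfortably above the number
of supersteps plus Giraph's own counters should do). On older Hadoop releases
the equivalent property was, as far as I recall, mapreduce.job.counters.limit,
which is why both names appear in this thread:

    <property>
      <name>mapreduce.job.counters.max</name>
      <value>500</value>
    </property>
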
> On Wed, Oct 14, 2015 at 1:56 PM, Thomas Karampelas <tkaramp@di.uoa.gr>
> wrote:
>
>> Hi,
>>
>> Did you set mapreduce.job.counters.limit on all the machines of your
>> cluster?
>>
>> Thomas
>>
>>
>> On 02/10/2015 11:13 PM, Arjun Sharma wrote:
>>
>>> Hi,
>>>
>>> I am trying to run a job which requires more than 100 iterations to
>>> terminate (converge). However, Giraph always exits with an error because
>>> of the limit on the number of counters, which is 120 (actual exception and
>>> stack trace below). I tried setting both mapreduce.job.counters.limit and
>>> mapreduce.job.counters.max from the job's command line, but that did not
>>> help. Any suggestions on how to resolve this?
>>>
>>> Thanks,
>>> Arjun.
>>>
>>> org.apache.hadoop.mapreduce.counters.LimitExceededException: Too many
>>> counters: 121 max=120
>>>         at
>>> org.apache.hadoop.mapreduce.counters.Limits.checkCounters(Limits.java:101)
>>>         at
>>> org.apache.hadoop.mapreduce.counters.Limits.incrCounters(Limits.java:108)
>>>         at
>>> org.apache.hadoop.mapreduce.counters.AbstractCounterGroup.addCounter(AbstractCounterGroup.java:78)
>>>         at
>>> org.apache.hadoop.mapreduce.counters.AbstractCounterGroup.addCounterImpl(AbstractCounterGroup.java:95)
>>>         at
>>> org.apache.hadoop.mapreduce.counters.AbstractCounterGroup.findCounterImpl(AbstractCounterGroup.java:123)
>>>         at
>>> org.apache.hadoop.mapreduce.counters.AbstractCounterGroup.findCounter(AbstractCounterGroup.java:113)
>>>         at
>>> org.apache.hadoop.mapreduce.counters.AbstractCounterGroup.findCounter(AbstractCounterGroup.java:130)
>>>         at
>>> org.apache.hadoop.mapred.Counters$Group.findCounter(Counters.java:369)
>>>         at
>>> org.apache.hadoop.mapred.Counters$Group.getCounterForName(Counters.java:314)
>>>         at
>>> org.apache.hadoop.mapred.Counters.findCounter(Counters.java:479)
>>>         at
>>> org.apache.hadoop.mapred.Task$TaskReporter.getCounter(Task.java:666)
>>>         at
>>> org.apache.hadoop.mapred.Task$TaskReporter.getCounter(Task.java:609)
>>>         at
>>> org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl.getCounter(TaskAttemptContextImpl.java:76)
>>>         at
>>> org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.getCounter(WrappedMapper.java:101)
>>>         at
>>> org.apache.giraph.counters.HadoopCountersBase.getCounter(HadoopCountersBase.java:60)
>>>         at
>>> org.apache.giraph.counters.GiraphTimers.getSuperstepMs(GiraphTimers.java:125)
>>>         at
>>> org.apache.giraph.master.MasterThread.run(MasterThread.java:140)
>>>
>>
>>
>
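
As a side note on the original attempt to set the limit from the job's command
line: since GiraphRunner is launched through ToolRunner, the usual way is the
generic -D option, placed before the computation class (jar and class names
below are placeholders; the remaining options are as in the sketch near the top
of this message):

    hadoop jar giraph-examples.jar org.apache.giraph.GiraphRunner \
        -D mapreduce.job.counters.max=500 \
        org.apache.giraph.examples.SimpleShortestPathsComputation \
        -vif ... -vip ... -vof ... -op ... -w 4

As far as I can tell, on some Hadoop/CDH versions the counter limit is read
from the node-local configuration rather than from the submitted job
configuration, in which case a client-side -D is silently ignored; that would
be consistent with only the cluster-wide mapred-site.xml change, or disabling
superstep counters, actually helping here.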
