hadoop-common-user mailing list archives

From Harsh J <ha...@cloudera.com>
Subject Re: resetting conf/ parameters in a live cluster.
Date Sat, 18 Aug 2012 15:06:25 GMT
Jay,

Oddly, a counter limit change (an increase, anyway) needs to be
applied at the JT, the TT, and *also* at the client to take real effect.
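
For example, here is a minimal client-side sketch (the job setup is just a
placeholder; the property name is the one from your hadoop-site.xml, and the
JT/TT daemons still need the same value in their own mapred-site.xml plus a
restart to honour it):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class CounterLimitExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Client-side override of the counter limit. The same value must
        // also be present in mapred-site.xml on the JobTracker and
        // TaskTracker nodes (those daemons read it at startup).
        conf.setInt("mapreduce.job.counters.limit", 15000);

        Job job = new Job(conf, "counter-limit-demo");
        // ... set mapper, reducer, input and output paths as usual ...
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }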

On Sat, Aug 18, 2012 at 8:31 PM, Jay Vyas <jayunit100@gmail.com> wrote:
> Hi guys:
>
> I've reset my max counters as follows :
>
> ./hadoop-site.xml:
>   <property>
>     <name>mapreduce.job.counters.limit</name>
>     <value>15000</value>
>   </property>
>
> However, a job is failing (after the reducers get to 100%!) at the very end,
> due to an exceeded counter limit.  I've confirmed in my
> code that indeed the correct counter parameter is being set.
>
> My hypothesis: somehow the name node's counters parameter is effectively
> being transferred to the slaves... BUT the name node *itself* hasn't updated its
> maximum counter allowance, so it throws an exception at the end of the job;
> that is, the dying message from Hadoop is
>
> " max counter limit 120 exceeded.... "
>
> I've confirmed in my job that the counter parameter is correct when the
> job starts... however, somehow the "120 limit exceeded" exception is
> still thrown.
>
> This is on Elastic MapReduce, Hadoop 0.20.205.
>
> --
> Jay Vyas
> MMSB/UCHC



-- 
Harsh J
