hadoop-common-user mailing list archives

From Jay Vyas <jayunit...@gmail.com>
Subject resetting conf/ parameters in a live cluster.
Date Sat, 18 Aug 2012 15:01:24 GMT
Hi guys:

I've reset my max counters as follows:


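For reference, the counter limit in 0.20.x-era Hadoop is controlled by `mapreduce.job.counters.limit` (default 120, matching the error below); a sketch of the mapred-site.xml override, assuming that property name, would be:

```xml
<!-- mapred-site.xml; property name assumed from 0.20.x-era Hadoop,
     where the default counter limit is 120 -->
<property>
  <name>mapreduce.job.counters.limit</name>
  <value>512</value>
</property>
```

As far as I can tell, in this version the limit is enforced on the JobTracker side, so setting it only in the client job conf may not be enough; the daemon generally needs the property in its own site file and a restart.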
However, a job is failing (after the reducers reach 100%!) at the very end,
due to an exceeded counter limit.  I've confirmed in my
code that the correct counter parameter is indeed being set.

My hypothesis: somehow the name node's counters parameter is effectively
being transferred to the slaves... BUT the name node *itself* hasn't updated its
maximum counter allowance, so it throws an exception at the end of the job.
That is, the dying message from hadoop is

" max counter limit 120 exceeded.... "

I've confirmed in my job that the counter parameter is correct when the
job starts... However... somehow the "120 limit exceeded" exception is
still thrown.

This is on Elastic MapReduce, Hadoop 0.20.205.
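On EMR specifically, cluster-wide site settings are usually pushed at launch time rather than edited afterwards; a hedged sketch using the configure-hadoop bootstrap action of that era (the action path and `-m` mapred-site flag are assumptions, not confirmed from this thread):

```shell
# Hypothetical launch-time override on old-style EMR; the
# configure-hadoop bootstrap action and its "-m,key=value"
# (mapred-site) syntax are assumptions about that era's tooling.
elastic-mapreduce --create --alive \
  --bootstrap-action s3://elasticmapreduce/bootstrap-actions/configure-hadoop \
  --args "-m,mapreduce.job.counters.limit=512"
```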

Jay Vyas
