hadoop-common-user mailing list archives

From Sheng Guo <enigma...@gmail.com>
Subject A question about time queue limit.
Date Tue, 03 Jul 2012 21:36:17 GMT

I ran a job that has only one reducer, and on the JobTracker page I saw that the reduce step takes a long time (which I think is normal). But then I saw it fail at 99% of the reduce step, with the message below:

Task attempt_201207021917_10357_r_000000_0 is over the queue time
limit of 900 seconds. Killing!

The reducer then automatically restarted, and the second attempt succeeded.

I checked the configuration XML file, and it has this:

mapred.queue.default.task-time-limit    900
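For reference, a property like this is typically set in mapred-site.xml. A minimal sketch of what that entry might look like, assuming the property name quoted above and a value in seconds (matching the 900-second limit in the error message):

```xml
<!-- Sketch only: property name taken from the config quoted above;
     the value is interpreted as seconds, matching the "900 seconds"
     in the kill message. -->
<property>
  <name>mapred.queue.default.task-time-limit</name>
  <value>900</value>
</property>
```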

I guess that is the reason for the first failure, but I don't understand why the second attempt succeeded. Does it automatically adjust this queue task time limit?


