hadoop-common-user mailing list archives

From John Bond <john.r.b...@gmail.com>
Subject Overriding mapred.tasktracker.expiry.interval on a per-job basis
Date Wed, 14 Dec 2011 14:34:28 GMT

I'm running a map/reduce job which does not send progress updates
during the reduce phase, so if that phase takes longer than 10 minutes
the task is seen to fail and is restarted.  Under normal operation this
works, as the reduce phase only takes a few minutes; however, I am
trying to run this job against some historical data, and the reduce
phase is taking longer than 10 minutes and is constantly being restarted.

Obviously the correct fix is to implement a reporter [1], which has
been done in the dev branch and will be rolled out once it has gone
through release management 8-|.  In the meantime, is there a way to
override mapred.tasktracker.expiry.interval for a specific job,
without changing mapred-site.xml and restarting the cluster?
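For context, a minimal sketch of what that fix presumably looks like
(hypothetical class and key/value types; the job's actual reducer is not
shown in this thread): an old-API reducer that calls Reporter.progress()
while it works, so the framework knows the attempt is still alive.

```java
import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class SlowReducer extends MapReduceBase
    implements Reducer<Text, LongWritable, Text, LongWritable> {

  @Override
  public void reduce(Text key, Iterator<LongWritable> values,
      OutputCollector<Text, LongWritable> output, Reporter reporter)
      throws IOException {
    long sum = 0;
    while (values.hasNext()) {
      sum += values.next().get();
      // Signal liveness: without periodic progress reports, an attempt
      // that emits nothing for 10 minutes is killed and rescheduled.
      reporter.progress();
    }
    output.collect(key, new LongWritable(sum));
  }
}
```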

I attempted to do the following:

`hadoop jar /path/to/jar/job.jar class.to.run -Dmapred.tasktracker.expiry.interval=600000000 arg1 arg2`

And in the job.conf I can see that the following is set, yet the jobs
are still marked as failed after 10 minutes:

mapred.tasktracker.expiry.interval	600000000
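One possibility, offered as an assumption rather than a confirmed
answer: mapred.tasktracker.expiry.interval is read by the JobTracker to
decide when a TaskTracker is considered lost, so a per-job override can
appear in job.conf and still have no effect. The per-job knob for the
"no progress for 10 minutes" kill is mapred.task.timeout (milliseconds),
which can be passed with -D in the same way or set in the driver. A
sketch, using a hypothetical driver class with the job wiring elided:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class LongReduceDriver extends Configured implements Tool {

  @Override
  public int run(String[] args) throws Exception {
    JobConf job = new JobConf(getConf(), LongReduceDriver.class);
    // Allow long gaps between progress reports before the attempt is
    // killed (same value the -D attempt above used, in milliseconds).
    job.setLong("mapred.task.timeout", 600000000L);
    // ... set mapper, reducer, and input/output paths here ...
    JobClient.runJob(job);
    return 0;
  }

  public static void main(String[] args) throws Exception {
    // ToolRunner runs GenericOptionsParser, which is what makes
    // -Dproperty=value arguments land in the job configuration at all.
    System.exit(ToolRunner.run(new Configuration(), new LongReduceDriver(), args));
  }
}
```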


