hadoop-hdfs-user mailing list archives

From Tsuyoshi OZAWA <ozawa.tsuyo...@gmail.com>
Subject Re: ignoring map task failure
Date Wed, 20 Aug 2014 09:04:58 GMT

Please check the values of mapreduce.map.maxattempts and
mapreduce.reduce.maxattempts. If you'd like to tolerate the failures
only for specific jobs, it's convenient to use the -D option to
override the configuration on the command line, as follows:

bin/hadoop jar job.jar -Dmapreduce.map.maxattempts=10
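One caveat: the -D generic option is parsed by GenericOptionsParser, so
it only takes effect when the job's main class runs through ToolRunner.
A minimal driver sketch, assuming that setup (the class name, job name,
and mapper/reducer wiring are placeholders, not from this thread):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

// Hypothetical driver: ToolRunner feeds -D overrides into the job's
// Configuration, so -Dmapreduce.map.maxattempts=10 takes effect.
public class MyJob extends Configured implements Tool {

  @Override
  public int run(String[] args) throws Exception {
    Configuration conf = getConf(); // already contains any -D overrides
    // Equivalent programmatic override, if you prefer hard-coding it:
    // conf.setInt("mapreduce.map.maxattempts", 10);
    Job job = Job.getInstance(conf, "my job");
    job.setJarByClass(MyJob.class);
    // ... set mapper, reducer, input and output paths here ...
    return job.waitForCompletion(true) ? 0 : 1;
  }

  public static void main(String[] args) throws Exception {
    System.exit(ToolRunner.run(new MyJob(), args));
  }
}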

- Tsuyoshi

On Tue, Aug 19, 2014 at 2:57 AM, Susheel Kumar Gadalay
<skgadalay@gmail.com> wrote:
> Check the parameter yarn.app.mapreduce.client.max-retries.
> On 8/18/14, parnab kumar <parnab.2007@gmail.com> wrote:
>> Hi All,
>>        I am running a job where there are between 1300-1400 map tasks. Some
>> map task fails due to some error. When 4 such maps fail the job naturally
>> gets killed.  How to  ignore the failed tasks and go around executing the
>> other map tasks. I am okay with loosing some data for the failed tasks.
>> Thanks,
>> Parnab
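For what it's worth, the same properties can also be set cluster-wide
in mapred-site.xml rather than per job; a sketch, with the value of 10
chosen arbitrarily to match the command-line example above:

<property>
  <name>mapreduce.map.maxattempts</name>
  <value>10</value>
</property>
<property>
  <name>mapreduce.reduce.maxattempts</name>
  <value>10</value>
</property>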

