hadoop-mapreduce-user mailing list archives

From: Ted Yu <yuzhih...@gmail.com>
Subject: Re: Fixing a failed reduce task
Date: Wed, 14 Jul 2010 02:40:36 GMT
Feel free to comment on https://issues.apache.org/jira/browse/MAPREDUCE-1928

On Tue, Jul 13, 2010 at 6:57 PM, Steve Lewis <lordjoe2000@gmail.com> wrote:

> Yes, of course, but the question is whether there is a way to do it while
> the job is running rather than restarting with different parameters.
>
>
> On Tue, Jul 13, 2010 at 4:51 PM, Ted Yu <yuzhihong@gmail.com> wrote:
>
>> A general solution for an OOME is to reduce the size of the input to each
>> (reduce) task so that each task consumes less memory.
>>
>>
>> On Tue, Jul 13, 2010 at 10:16 AM, Steve Lewis <lordjoe2000@gmail.com> wrote:
>>
>>> I am running a map reduce job where a few reduce tasks fail with an out
>>> of memory error.
>>> Increasing the memory is not an option. However, if a retry had
>>> information that an earlier attempt failed out of memory, and especially
>>> if it had access to a few numbers describing how far the earlier attempt
>>> managed to get, it could defend against the error.
>>> I have seen little information about how a retried task might access the
>>> error logs or other information from previous attempts. Is there such a
>>> mechanism?
>>>
>>>
>>> --
>>> Steven M. Lewis PhD
>>> Institute for Systems Biology
>>> Seattle WA
>>>
>>
>>
>
>
> --
> Steven M. Lewis PhD
> Institute for Systems Biology
> Seattle WA
>
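
To make the suggestion quoted above concrete: the simplest way to shrink the
input seen by each reduce task is to run more reducers, so each partition of
the shuffled data is smaller. A minimal sketch against the Hadoop 0.20
mapreduce API; the reducer count of 200 is an arbitrary illustration, not a
recommendation:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.mapreduce.Job;

  public class MoreReducers {
      public static void main(String[] args) throws Exception {
          Configuration conf = new Configuration();
          Job job = new Job(conf, "oom-prone-job");
          // More reducers means each one receives a smaller partition of
          // the shuffled data, so each consumes less memory.
          job.setNumReduceTasks(200);  // 200 is an example value; tune it
          // ... set mapper, reducer, input/output paths as usual ...
      }
  }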
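On the retry question: as far as I know there is no built-in channel that
hands a retry the logs or counters of earlier attempts, but a task can at
least detect that it is a retry, because the attempt number is encoded in
its task attempt ID. A hedged sketch using the old mapred API (the
low-memory fallback is an illustrative assumption, not an existing feature):

  import java.io.IOException;
  import java.util.Iterator;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapred.JobConf;
  import org.apache.hadoop.mapred.MapReduceBase;
  import org.apache.hadoop.mapred.OutputCollector;
  import org.apache.hadoop.mapred.Reducer;
  import org.apache.hadoop.mapred.Reporter;
  import org.apache.hadoop.mapred.TaskAttemptID;

  public class RetryAwareReducer extends MapReduceBase
          implements Reducer<Text, Text, Text, Text> {

      private boolean lowMemoryMode = false;

      @Override
      public void configure(JobConf conf) {
          // "mapred.task.id" holds the full task *attempt* id in the old
          // API; it is null outside a real task (e.g. local debugging).
          String idStr = conf.get("mapred.task.id");
          if (idStr != null) {
              TaskAttemptID attempt = TaskAttemptID.forName(idStr);
              // getId() is the attempt number: 0 on the first try, 1+ on
              // retries.
              lowMemoryMode = attempt.getId() > 0;
          }
      }

      public void reduce(Text key, Iterator<Text> values,
                         OutputCollector<Text, Text> out, Reporter reporter)
              throws IOException {
          if (lowMemoryMode) {
              // Assumption for illustration: on a retry, take a slower
              // path that buffers less in memory (e.g., spill partial
              // results to local disk).
          }
          // ... normal reduce logic ...
      }
  }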
