hadoop-mapreduce-user mailing list archives

From Geoffry Roberts <geoffry.robe...@gmail.com>
Subject Re: What does MAX_FAILED_UNIQUE_FETCHES mean?
Date Mon, 27 Jul 2009 16:46:33 GMT
Thanks for the response.

Now how do I fix this?  Is the problem most likely in my MR code, in my
Hadoop configuration, or somewhere else?

On Mon, Jul 27, 2009 at 9:33 AM, Harish Mallipeddi <
harish.mallipeddi@gmail.com> wrote:

>
> On Mon, Jul 27, 2009 at 9:42 PM, Geoffry Roberts <
> geoffry.roberts@gmail.com> wrote:
>
>> All,
>>
>> I am attempting to run my first map reduce job and I am getting the
>> following error.  Does anyone know what it means?
>>
>> Shuffle Error: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.
>>
>>
> After the maps are complete, reducers need to fetch the intermediate
> map-outputs so they can reduce() them (this is part of the "shuffle" phase).
> It seems that in your case, for some reason, the reducers are unable to fetch
> the map-outputs from the corresponding TaskTracker nodes even after
> MAX_FAILED_UNIQUE_FETCHES attempts. The TaskTrackers (actually a Jetty
> webserver running on them) are responsible for serving these map-outputs.
>
> --
> Harish Mallipeddi
> http://blog.poundbang.in
>
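In practice this error usually points at cluster setup rather than the MR code itself: reducers fetch map-outputs over HTTP from each TaskTracker's embedded Jetty server, so hostname-resolution problems between nodes, a firewalled TaskTracker HTTP port, or too few serving threads are typical causes. A minimal mapred-site.xml sketch of the properties commonly checked for this; the property names assume a Hadoop 0.20-era configuration:

```xml
<!-- mapred-site.xml: settings commonly reviewed when reducers fail
     to fetch map-outputs during the shuffle phase.
     Property names are from the Hadoop 0.20.x line. -->
<configuration>
  <!-- Number of Jetty worker threads each TaskTracker uses to serve
       map-outputs; too few can starve fetches on clusters with many
       reducers running concurrently. -->
  <property>
    <name>tasktracker.http.threads</name>
    <value>40</value>
  </property>
  <!-- Number of parallel fetches each reducer runs; lowering this
       reduces the load on the serving TaskTrackers. -->
  <property>
    <name>mapred.reduce.parallel.copies</name>
    <value>5</value>
  </property>
</configuration>
```

It is also worth verifying that every node can resolve every other node's hostname (consistent /etc/hosts entries or working DNS), since the fetch URLs use the hostnames the TaskTrackers report to the JobTracker.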
