hadoop-common-user mailing list archives

From Arun C Murthy <...@yahoo-inc.com>
Subject Re: How to deal with "too many fetch failures"?
Date Wed, 19 Aug 2009 16:31:21 GMT
I'd dig around a bit more to check whether it's caused by a
specific set of nodes... i.e., are maps on specific tasktrackers
failing in this manner?

Arun

On Aug 18, 2009, at 10:23 PM, yang song wrote:

> Hello, all
>    I have run into the "too many fetch failures" problem when I submit
> a big job (e.g., tasks > 10000). I know this error occurs when several
> reducers are unable to fetch a given map output. However, I'm sure the
> slaves can contact each other.
>    I'm puzzled and have no idea how to deal with it. Maybe the network
> transfer is bad, but how can I solve it? Would increasing
> mapred.reduce.parallel.copies and mapred.reduce.copy.backoff make a
> difference?
>    Thank you!
>    Inifok
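[Editor's note: the two properties named in the question are set in mapred-site.xml (or per-job via -D). The sketch below only shows where they go; the values are illustrative assumptions, not recommended settings.]

```xml
<!-- mapred-site.xml sketch: property names taken from the question above;
     values are placeholders for illustration only. -->
<configuration>
  <!-- Number of parallel transfers each reducer uses to fetch map output -->
  <property>
    <name>mapred.reduce.parallel.copies</name>
    <value>10</value>
  </property>
  <!-- Backoff window (seconds) a reducer waits through retries of a
       failing fetch before declaring a fetch failure -->
  <property>
    <name>mapred.reduce.copy.backoff</name>
    <value>300</value>
  </property>
</configuration>
```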

