hadoop-common-user mailing list archives

From: Ted Dunning <ted.dunn...@gmail.com>
Subject: Re: How to deal with "too many fetch failures"?
Date: Wed, 19 Aug 2009 07:44:23 GMT
Which version of Hadoop are you running?

On Tue, Aug 18, 2009 at 10:23 PM, yang song <hadoop.inifok@gmail.com> wrote:

> Hello, all
>    I've run into the "too many fetch failures" problem when I submit a big
> job (e.g., more than 10,000 tasks). I know this error occurs when several
> reducers are unable to fetch a given map output, but I'm sure the slaves
> can contact each other.
>    I'm puzzled and don't know how to deal with it. Maybe the network
> transfer is bad, but how can I fix that? Would increasing
> mapred.reduce.parallel.copies and mapred.reduce.copy.backoff make a
> difference?
>    Thank you!
>    Inifok
>
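
For reference, both of the knobs mentioned above are ordinary job
configuration properties. A minimal sketch of raising them in
mapred-site.xml, with illustrative values only (property names follow
the pre-0.21 configuration namespace):

    <!-- mapred-site.xml -->
    <property>
      <name>mapred.reduce.parallel.copies</name>
      <!-- number of parallel transfers each reducer runs when
           fetching map outputs; the stock default is 5 -->
      <value>10</value>
    </property>
    <property>
      <name>mapred.reduce.copy.backoff</name>
      <!-- upper bound in seconds on the backoff a reducer applies to
           a failing fetch before reporting it as a fetch failure -->
      <value>300</value>
    </property>

The same settings can also be passed per job on the command line,
e.g. hadoop jar myjob.jar ... -D mapred.reduce.parallel.copies=10
(assuming the job parses generic options via ToolRunner).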



-- 
Ted Dunning, CTO
DeepDyve
