hadoop-common-user mailing list archives

From yang song <hadoop.ini...@gmail.com>
Subject Re: How to deal with "too many fetch failures"?
Date Wed, 19 Aug 2009 12:19:53 GMT
I'm sorry, the version is 0.19.1.

2009/8/19 Ted Dunning <ted.dunning@gmail.com>

> Which version of hadoop are you running?
>
> On Tue, Aug 18, 2009 at 10:23 PM, yang song <hadoop.inifok@gmail.com>
> wrote:
>
> > Hello, all
> >    I have run into the "too many fetch failures" problem when I submit a
> > big job (e.g. >10000 tasks). I know this error occurs when several
> > reducers are unable to fetch a given map output, but I'm sure the slaves
> > can contact each other.
> >    I'm puzzled and don't know how to deal with it. Maybe the network
> > transfer is bad, but how can I fix that? Would increasing
> > mapred.reduce.parallel.copies and mapred.reduce.copy.backoff make a
> > difference? (A configuration sketch follows below the quoted thread.)
> >    Thank you!
> >    Inifok
> >
>
>
>
> --
> Ted Dunning, CTO
> DeepDyve
>
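
Regarding those two knobs: a minimal sketch of setting them per job through
the old mapred API that 0.19 uses. The driver class name, job setup, and the
chosen values are illustrative assumptions, not tested recommendations; the
two property names are the ones mentioned above.

    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;

    public class FetchTuning {  // hypothetical driver class, for illustration
        public static void main(String[] args) throws Exception {
            JobConf conf = new JobConf(FetchTuning.class);

            // Number of parallel transfers each reducer uses to fetch map
            // outputs (default 5). Raising it can speed up the shuffle, but
            // also adds load on the tasktrackers serving the map outputs.
            conf.setInt("mapred.reduce.parallel.copies", 10);

            // Time window (in seconds) over which a reducer backs off and
            // retries a failed fetch before reporting it (default 300).
            conf.setInt("mapred.reduce.copy.backoff", 300);

            // ... mapper/reducer classes, input/output paths, etc. ...
            JobClient.runJob(conf);
        }
    }

The same properties can also be set cluster-wide in hadoop-site.xml instead
of per job.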
