hadoop-common-user mailing list archives

From Ted Dunning <ted.dunn...@gmail.com>
Subject Re: How to deal with "too many fetch failures"?
Date Thu, 20 Aug 2009 06:25:55 GMT
I think the problem I'm remembering was due to poor recovery from this kind
of failure.  The underlying fault is likely poor connectivity between your
machines.  Test that every member of your cluster can reach all the others
on all of the ports used by Hadoop.

See here for hints: http://markmail.org/message/lgafou6d434n2dvx
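
A quick way to check this is to try opening a TCP connection from each node
to the Hadoop ports on every other node. The sketch below is only an
illustration (it is not from the original thread): the host names are
placeholders, and the ports are the usual 0.19/0.20-era defaults, so
substitute whatever your hadoop-site.xml actually configures.

#!/usr/bin/env python
# Minimal connectivity check: from this machine, try a TCP connect to
# each Hadoop port on every node in the cluster. Host names and port
# numbers below are examples; adjust them to your configuration.
import socket

HOSTS = ["node1", "node2", "node3"]   # placeholder host list

PORTS = {
    9000:  "NameNode RPC (fs.default.name)",
    9001:  "JobTracker RPC (mapred.job.tracker)",
    50010: "DataNode data transfer",
    50060: "TaskTracker HTTP (serves map output to reducers)",
}

def can_connect(host, port, timeout=3.0):
    # Return True if a TCP connection to host:port succeeds.
    try:
        sock = socket.create_connection((host, port), timeout)
        sock.close()
        return True
    except (socket.error, socket.timeout):
        return False

for host in HOSTS:
    for port, desc in PORTS.items():
        status = "ok" if can_connect(host, port) else "FAILED"
        print("%-12s %5d  %-45s %s" % (host, port, desc, status))

Run it from every node in turn, since it only tests outbound connections
from the machine it runs on. The TaskTracker HTTP port (50060 by default)
matters most for "too many fetch failures", because reduce tasks pull map
output over it.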

On Wed, Aug 19, 2009 at 10:39 PM, yang song <hadoop.inifok@gmail.com> wrote:

>    Thank you, Ted. Upgrading the current cluster would be a huge amount of
> work, and we don't want to do that. Could you tell me in detail how 0.19.1
> causes certain failures?
>    Thanks again.
>
> 2009/8/20 Ted Dunning <ted.dunning@gmail.com>
>
> > I think I remember something about 0.19.1 in which certain failures
> > would cause this.  Consider using an updated 0.19 release or moving to
> > 0.20 as well.
> >
> > On Wed, Aug 19, 2009 at 5:19 AM, yang song <hadoop.inifok@gmail.com>
> > wrote:
> >
> > > I'm sorry, the version is 0.19.1



-- 
Ted Dunning, CTO
DeepDyve
