hadoop-common-user mailing list archives

From Jothi Padmanabhan <joth...@yahoo-inc.com>
Subject Re: Too many fetch failures
Date Tue, 21 Jul 2009 07:35:56 GMT
This error occurs when several reducers are unable to fetch the given map
output (attempt_200907202331_0001_m_000001_0 in your example).
I am guessing that there is a configuration issue in your setup -- the
reducers are not able to contact the TaskTracker, or to transfer map outputs
from it. The TT log on the node where the map ran could shed some light on
the error. Could you verify that all the nodes in your cluster are able to
connect to one another? You could also log in to the reducer node manually,
try pulling the map output yourself, and see what error you get.
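As a sketch of that manual check: reducers in this Hadoop version fetch map
outputs over HTTP from the TaskTracker's mapOutput servlet. The snippet below
only builds the fetch URL and lists the commands to try from the reducer node;
the hostname "map-node-host" is a placeholder you must substitute, and the
port 50060 is the default TaskTracker HTTP port, which your cluster may have
overridden.

```shell
#!/bin/sh
# Values from the failing attempt in the log above.
JOB=job_200907202331_0001
MAP=attempt_200907202331_0001_m_000001_0
REDUCE=0              # partition number of the reducer doing the fetch
TT_HOST=map-node-host # placeholder: the node where the failed map ran
TT_PORT=50060         # default TaskTracker HTTP port; check your config

# URL the reducer would use to pull this map output.
URL="http://${TT_HOST}:${TT_PORT}/mapOutput?job=${JOB}&map=${MAP}&reduce=${REDUCE}"
echo "$URL"

# From the reducer node, first test basic reachability, then try the
# fetch itself (commented out so nothing runs against a live cluster
# by accident):
#   ping -c 1 "$TT_HOST"
#   telnet "$TT_HOST" "$TT_PORT"
#   curl -v "$URL" -o /dev/null
```

If the curl fetch hangs or is refused, that points at firewall or hostname
resolution problems between the nodes rather than at the job itself.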


On 7/21/09 12:33 PM, "George Pang" <p0941p@gmail.com> wrote:

> Hi users,
> I got this "Too many fetch failures" in the following error message:
> 09/07/20 23:33:39 INFO mapred.JobClient:  map 100% reduce 16%
> 09/07/20 23:46:22 INFO mapred.JobClient: Task Id :
> attempt_200907202331_0001_m_000001_0, Status : FAILED
> Too many fetch-failures
> 09/07/20 23:46:37 INFO mapred.JobClient: Job complete: job_200907202331_0001
> Don't know why it always stops at reduce 16% and then resumes. It takes a
> long time even to run a small task.
> I saw people asking the same question in previous mailing list threads, but
> I didn't find the help I needed.
> Hadoop version:  0.18.3
> Ubuntu version:  8.04
> Thank you in advance!
> George
