hadoop-user mailing list archives

From Manoj Babu <manoj...@gmail.com>
Subject Re: Reg Too many fetch-failures Error
Date Sat, 02 Feb 2013 02:17:39 GMT
Hi Vijay,

Thanks for the information.
Few jobs were running in the cluster at the time.

Cheers!
Manoj.


On Fri, Feb 1, 2013 at 11:22 PM, Vijay Thakorlal <vijayjtuk@hotmail.com> wrote:

> Hi Manoj,
>
> As you may be aware, this means the reducers are unable to fetch
> intermediate data from the TaskTrackers that ran the map tasks. You can try:
>
> * increasing tasktracker.http.threads, so there are more threads to handle
> fetch requests from reducers;
> * decreasing mapreduce.reduce.parallel.copies, so fewer copies/fetches
> are performed in parallel.
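[Editor's note: the two properties above go in mapred-site.xml. A minimal sketch follows; the values shown are illustrative assumptions, not recommendations, and should be tuned for the cluster.]

```xml
<!-- mapred-site.xml: illustrative values only (assumptions; tune for your cluster) -->
<configuration>
  <!-- More TaskTracker HTTP threads serving map output to reducers (default is 40) -->
  <property>
    <name>tasktracker.http.threads</name>
    <value>80</value>
  </property>
  <!-- Fewer parallel fetches per reducer (default is 5) -->
  <property>
    <name>mapreduce.reduce.parallel.copies</name>
    <value>3</value>
  </property>
</configuration>
```

Raising the first spreads fetch load across more server threads on each TaskTracker; lowering the second reduces how many TaskTrackers each reducer hits at once.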
>
>
> It could also be due to a temporary DNS issue.
>
> See slide 26 of this presentation for potential causes of this message:
> http://www.slideshare.net/cloudera/hadoop-troubleshooting-101-kate-ting-cloudera
>
>
> Not sure why you did not see the problem before, but was it the same
> data or different data? Did you have other jobs running on your cluster?
>
> Hope that helps.
>
> Regards,
> Vijay
>
>
> *From:* Manoj Babu [mailto:manoj444@gmail.com]
> *Sent:* 01 February 2013 15:09
> *To:* user@hadoop.apache.org
> *Subject:* Reg Too many fetch-failures Error
>
> Hi All,
>
>
> I am getting a "Too many fetch-failures" exception.
>
> What might be the reason for this exception? For the same size of data I
> did not face this error earlier, and there was a change in the code.
> How can I avoid it?
>
> Thanks in advance.
>
> Cheers!
> Manoj.
>
