hadoop-general mailing list archives

From himanshu chandola <himanshu_cool...@yahoo.com>
Subject Re: reduce copier failed
Date Sun, 22 Nov 2009 03:11:36 GMT
I am writing to a particular partition in the nodes of my cluster. 


Is that the only reason this could happen, or does anyone know of other causes?
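For what it's worth, here is how I'm watching free space on that partition while
the job runs (the mount point below is just an example, not my actual layout):

```shell
# Example only: replace /data1 with the mount point that backs
# mapred.local.dir, where the reduce copier spills fetched map output.
PART=/data1
# Portable df output; field 4 of the second line is available space in KB.
avail_kb=$(df -kP "$PART" | awk 'NR==2 {print $4}')
echo "free on $PART: ${avail_kb} KB"
```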



Thanks.

 Morpheus: Do you believe in fate, Neo?
Neo: No.
Morpheus: Why Not?
Neo: Because I don't like the idea that I'm not in control of my life.



----- Original Message ----
From: Jason Lotz <jason.lotz@explorysmedical.com>
To: "general@hadoop.apache.org" <general@hadoop.apache.org>
Sent: Sat, November 21, 2009 8:30:28 PM
Subject: Re: reduce copier failed

Are you writing to /tmp?  Depending on your OS and configuration, that
directory may sit on a partition that cannot grow to the full size of the
available disk space.  Many Linux distributions mount /tmp this way by
default.
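A quick way to check (a sketch; the paths assume a default 0.18 install, so
adjust them for your setup):

```shell
# See whether /tmp sits on its own, possibly small, partition.
df -h /tmp
# Hadoop 0.18 keeps intermediate map output under mapred.local.dir, which
# defaults to ${hadoop.tmp.dir}/mapred/local; check that path as well.
df -h /tmp/hadoop-"$USER"/mapred/local 2>/dev/null || true
```

If /tmp shows up as a small, separate filesystem in that output, pointing
mapred.local.dir at a larger partition is the usual fix.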

Jason

On Nov 21, 2009, at 2:05 AM, himanshu chandola wrote:

> Hi,
> I have a reduce copier that dies every time my 10 iterations of
> map and reduce pass through:
> java.io.IOException: attempt_200910061342_0139_r_000001_0The reduce
> copier failed
>    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:255)
>    at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:2210)
>
>
> Finally the job dies because of this. I thought it could be due to
> low disk space, but I've estimated the disk space required by the
> reduce output, and the current free disk space is enough to
> accommodate it.
>
> If anyone could point out anything, that would be great! I'm running
> Cloudera's build, version 0.18.3-14.
>
>
> Thanks
>


