hadoop-common-user mailing list archives

From Mayuran Yogarajah <mayuran.yogara...@casalemedia.com>
Subject Re: What can cause: Map output copy failure
Date Fri, 08 Jan 2010 18:59:29 GMT
Amogh Vasekar wrote:
> Hi,
> Can you please let us know your system configuration running hadoop?
> The error you see is when the reducer is copying its respective map output into memory.
> The parameter mapred.job.shuffle.input.buffer.percent can be manipulated
> for this (a bunch of others will also help you optimize the sort later),
> but I would say 200M is far too little memory allocated for Hadoop
> application JVMs :)
>
> Amogh
>
Hi Amogh,

We're using a 3-node cluster; all nodes are quad-core (Intel X3220) with
4 GB of RAM, running CentOS 5.3 and Hadoop 0.18.3.

After looking at the source code I (possibly mistakenly) thought that
fs.inmemory.size.mb might have something to do with this. I had bumped
it up to 200 (the default is 75), but the task JVM heap was left at
200M, so the reducer's in-memory buffer was as large as the entire heap.
When I configured the cluster initially I had mistakenly assumed that a
200M heap was enough, but it wasn't.
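Concretely, the conflicting pair of settings in hadoop-site.xml looked
roughly like this (a sketch, not the exact file; the property names are
the 0.18 ones):

  <!-- hadoop-site.xml (sketch): buffer as large as the whole task heap -->
  <property>
    <name>fs.inmemory.size.mb</name>
    <value>200</value>        <!-- raised from the default of 75 -->
  </property>
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx200m</value>   <!-- heap left at the 200M default -->
  </property>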

I was able to make the error go away by doing two things (sketched
below the list):
1) increasing mapred.child.java.opts
2) decreasing fs.inmemory.size.mb
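
The change amounted to something like the following; the specific
numbers (-Xmx512m and 100) are only illustrative, not what I'm claiming
anyone should use, but the direction of the change is what mattered:

  <!-- hadoop-site.xml (sketch): keep the buffer well under the heap -->
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx512m</value>   <!-- illustrative: more heap per task JVM -->
  </property>
  <property>
    <name>fs.inmemory.size.mb</name>
    <value>100</value>        <!-- illustrative: smaller in-memory buffer -->
  </property>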

Do you know of any other parameters I should be tweaking?

thanks,
M
