hadoop-hdfs-user mailing list archives

From Ravi Prakash <ravi...@ymail.com>
Subject Re: map container is assigned default memory size rather than user configured which will cause TaskAttempt failure
Date Wed, 23 Oct 2013 17:40:48 GMT
Manu!

This should not be the case. All tasks should have the configuration values you specified
propagated to them. Are you sure your setup is correct? Is it always the same nodes that
run with 1024MB? Perhaps those nodes have a local mapred-site.xml that overrides your setting?
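For reference, the job-side setting would normally live in the client's mapred-site.xml in a stanza like the one below (the 2560 value just mirrors the one you mentioned); a stray copy of this file on a worker node carrying a different value is one way the default could sneak back in:

```xml
<!-- mapred-site.xml: memory requested for each map-task container -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>2560</value>
</property>
```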

HTH
Ravi




On Tuesday, October 22, 2013 9:09 PM, Manu Zhang <owenzhang1990@gmail.com> wrote:
 
Hi, 
I've been running Terasort on Hadoop-2.0.4.

Every time, a small number of maps (4 or 5) fail because their containers run beyond
the virtual memory limit.

I've set mapreduce.map.memory.mb to a safe value (2560MB), so most TaskAttempts go fine,
but the failed maps run with the default 1024MB instead.
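To illustrate why the fallback to 1024MB matters here: the NodeManager's virtual-memory check (as I understand it) kills a container whose vmem usage exceeds its physical request times yarn.nodemanager.vmem-pmem-ratio, which defaults to 2.1 in Hadoop 2.x. A quick sketch of that arithmetic:

```python
# Sketch of the NodeManager vmem check, assuming the default
# yarn.nodemanager.vmem-pmem-ratio of 2.1 (Hadoop 2.x).
def vmem_limit_mb(container_mb, vmem_pmem_ratio=2.1):
    """Virtual-memory ceiling for a container with the given physical request."""
    return container_mb * vmem_pmem_ratio

# A map that fell back to the 1024 MB default gets a much lower
# vmem ceiling than one honoring the configured 2560 MB:
print(vmem_limit_mb(1024))  # 2150.4 MB -- easy to exceed with a large JVM
print(vmem_limit_mb(2560))  # 5376.0 MB
```

So a map container that silently reverts to the default is exactly the one you'd expect to trip the vmem limit.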

My question is thus: why are a small number of containers assigned the default memory
size rather than the user-configured value?

Any thoughts?

Thanks,
Manu Zhang