hadoop-hdfs-user mailing list archives

From Manu Zhang <owenzhang1...@gmail.com>
Subject map container is assigned the default memory size rather than the user-configured one, causing TaskAttempt failure
Date Wed, 23 Oct 2013 02:09:06 GMT

I've been running Terasort on Hadoop-2.0.4.

Every run, a small number of map tasks (4 or 5) fail because their containers run
beyond the virtual memory limit.
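For context, a minimal sketch of the check behind these failures, assuming stock Hadoop 2.x defaults: YARN's NodeManager kills a container whose virtual memory exceeds its allocated physical memory times yarn.nodemanager.vmem-pmem-ratio (default 2.1). The function name below is illustrative, not a Hadoop API:

```python
DEFAULT_MAP_MEMORY_MB = 1024  # stock default for mapreduce.map.memory.mb
VMEM_PMEM_RATIO = 2.1         # stock default for yarn.nodemanager.vmem-pmem-ratio

def vmem_limit_mb(container_mb: int, ratio: float = VMEM_PMEM_RATIO) -> float:
    """Virtual-memory ceiling the NodeManager enforces for a container
    (illustrative calculation, not Hadoop code)."""
    return container_mb * ratio

# A map container that fell back to the 1024 MB default gets a much
# lower ceiling than one launched with the configured 2560 MB:
print(vmem_limit_mb(DEFAULT_MAP_MEMORY_MB))  # ~2150.4 MB
print(vmem_limit_mb(2560))                   # 5376.0 MB
```

This would explain why only the maps launched at the default size trip the limit while the 2560 MB maps stay well under it.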

I've set mapreduce.map.memory.mb to a safe value (2560 MB), so most
TaskAttempts succeed, but the failed maps were launched with the default value instead.
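For reference, the setting was configured along these lines in mapred-site.xml (the 2560 value is from above; this is the standard Hadoop property format):

```xml
<!-- mapred-site.xml: per-map-task container size. Maps that somehow
     miss this in their job conf fall back to the 1024 MB default. -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>2560</value>
</property>
```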

My question is thus: why are a small number of containers assigned the default
memory value rather than the user-configured one?

Any thoughts?

Manu Zhang
