hadoop-mapreduce-user mailing list archives

From Manu Zhang <owenzhang1...@gmail.com>
Subject Re: map container is assigned default memory size rather than user configured which will cause TaskAttempt failure
Date Thu, 24 Oct 2013 00:59:53 GMT
Thanks Ravi.

I do have mapred-site.xml under /etc/hadoop/conf/ on those nodes, but it
seems odd to me that the tasks would read their configuration from those
files, since it is the client that applies for the resources. I have
another mapred-site.xml in the directory where I run my job, and I would
expect my job to read its configuration from that file. Please correct me
if I am mistaken.
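
To double-check which file actually wins on the client side, I can print the
resolved value and the resource that supplied it. A minimal sketch, assuming a
Hadoop version where Configuration#getPropertySources is available (the class
name is just for illustration):

import java.util.Arrays;
import org.apache.hadoop.conf.Configuration;

// Hadoop's Configuration resolves *-site.xml from the classpath
// (e.g. $HADOOP_CONF_DIR), not from the current working directory,
// so this shows which file actually supplied the value.
public class WhichConf {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        conf.addResource("mapred-site.xml"); // classpath lookup, not cwd
        System.out.println("mapreduce.map.memory.mb = "
                + conf.get("mapreduce.map.memory.mb", "1024 (default)"));
        // Report the resource(s) that set the property, if any
        String[] sources = conf.getPropertySources("mapreduce.map.memory.mb");
        System.out.println("set by: "
                + (sources == null ? "defaults" : Arrays.toString(sources)));
    }
}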

Also, it is not always the same nodes that fail, and the number of failures
is random, too.

Anyway, I will put my settings in all the nodes' mapred-site.xml files and
see if the problem goes away.
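
Concretely, something like this in each node's mapred-site.xml; the memory
value is the one from this thread, while the mapreduce.map.java.opts entry and
its -Xmx figure are my own guess at a sane heap below the container size:

<configuration>
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>2560</value>
  </property>
  <property>
    <!-- assumption: the JVM heap must fit inside the 2560MB container -->
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx2048m</value>
  </property>
</configuration>

The same property can also be passed per job via the generic options, e.g.
-Dmapreduce.map.memory.mb=2560, which should take precedence over the
node-local files (unless a property there is marked final).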

Manu


On Thu, Oct 24, 2013 at 1:40 AM, Ravi Prakash <ravihoo@ymail.com> wrote:

> Manu!
>
> This should not be the case. All tasks should have the configuration
> values you specified propagated to them. Are you sure your setup is
> correct? Are they always the same nodes which run with 1024MB? Perhaps you
> have mapred-site.xml on those nodes?
>
> HTH
> Ravi
>
>
> On Tuesday, October 22, 2013 9:09 PM, Manu Zhang <owenzhang1990@gmail.com> wrote:
>
> Hi,
>
> I've been running Terasort on Hadoop-2.0.4.
>
> Every time, there are a small number of map failures (4 or 5, say) because
> the containers run beyond their virtual memory limits.
>
> I've set mapreduce.map.memory.mb to a safe value (2560MB, say), so most
> TaskAttempts go fine, while the failed maps are assigned the default
> 1024MB.
>
> My question is thus: why are a small number of containers assigned the
> default memory size rather than the user-configured one?
>
> Any thoughts?
>
> Thanks,
> Manu Zhang
