hadoop-mapreduce-user mailing list archives

From Zhudacai <zhuda...@hisilicon.com>
Subject issue about enable UseNUMA flag in hadoop framework
Date Sat, 19 Sep 2015 08:42:22 GMT
Hi, all,



We've run into a problem enabling the UseNUMA flag in our Hadoop framework.

We've tried specifying JVM flags when the Hadoop daemons start,
e.g. export HADOOP_NAMENODE_OPTS="-XX:+UseNUMA -Dcom.sun.management.jmxremote $HADOOP_NAMENODE_OPTS",
export HADOOP_SECONDARYNAMENODE_OPTS="-XX:+UseNUMA -Dcom.sun.management.jmxremote $HADOOP_SECONDARYNAMENODE_OPTS",
etc.
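To confirm whether the flag actually reached a running JVM, something like the following can be used (a sketch, assuming a Linux host with the JDK tools on the PATH; the PID 12345 is a placeholder):

```shell
# List running Hadoop JVMs (NameNode, DataNode, YarnChild task JVMs, ...).
jps

# Query the UseNUMA flag on one JVM; replace 12345 with a PID from jps.
jinfo -flag UseNUMA 12345

# Or grep the flag out of the process command line directly.
tr '\0' ' ' < /proc/12345/cmdline | grep -o '[-]XX:+UseNUMA'
```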
But the ratio between local and remote memory accesses remains 2:1, the same as before.

Then we found that Hadoop MapReduce starts child JVM processes to run tasks in containers, so
we passed -XX:+UseNUMA to those JVMs by setting the configuration parameter child.java.opts. But
Hadoop then started throwing ExitCodeException (exitCode=1); it seems that Hadoop does not
accept this JVM parameter.
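For anyone reproducing this, here is a minimal sketch of how the flag can be passed to the task JVMs via mapred-site.xml. This assumes the Hadoop 2.x property names mapreduce.map.java.opts and mapreduce.reduce.java.opts (which supersede the older mapred.child.java.opts); the -Xmx value is a placeholder:

```xml
<!-- mapred-site.xml: passing -XX:+UseNUMA to MapReduce task JVMs.
     Note: setting these properties OVERRIDES the default task JVM
     options, including the heap size, so -Xmx must be restated. -->
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx1024m -XX:+UseNUMA</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx1024m -XX:+UseNUMA</value>
</property>
```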

What should we do to enable the UseNUMA flag for our Hadoop? Or, more generally, what should we do to
reduce remote memory accesses on a NUMA system? Should we just change the Hadoop scripts, or do we
need to modify the source code? And how?

The hadoop version is 2.6.0.

Best Regards.

Dacai
