hadoop-mapreduce-user mailing list archives

From Hai Lan <lanhai1...@gmail.com>
Subject change task jvm max heap size
Date Fri, 21 Oct 2016 10:41:48 GMT
Dear all

I'm facing an issue where I always see the following in the syslog:

INFO [AsyncDispatcher event handler] org.apache.hadoop.mapred.JobConf: Task
java-opts do not specify heap size. Setting task attempt jvm max heap size
to -Xmx3277m

No matter what arguments I add, it always shows the same heap size as
above. I have tried:
  conf.set("mapreduce.reduce.memory.mb", "16384");
  conf.set("mapreduce.reduce.java.opts", "-Xmx16384m");
  conf.set("mapred.child.java.opts", "-Xmx16384m");

or used -D on the command line. I can see those values are set correctly
in the job metadata in Hue, but I still get a Java heap space
out-of-memory error when running large jobs. I'm not sure whether the
error is related to this task max heap size?
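For completeness, here is the equivalent cluster-side configuration I would expect to take effect, as a mapred-site.xml fragment (a minimal sketch; the 16 GB values mirror what I set in code, and assume the YARN container maximum allows them):

```xml
<!-- mapred-site.xml: per-reducer container size and JVM heap.
     Assumes yarn.scheduler.maximum-allocation-mb on the cluster is >= 16384,
     otherwise YARN will cap or reject containers of this size. -->
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>16384</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx16384m</value>
</property>
```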

Many Thanks,

Best,

Hai
