ambari-dev mailing list archives

From "Alexander Denissov (JIRA)" <>
Subject [jira] [Created] (AMBARI-13719) MAPREDUCE2 service check fails sporadically with JDK1.8
Date Wed, 04 Nov 2015 20:00:30 GMT
Alexander Denissov created AMBARI-13719:

             Summary: MAPREDUCE2 service check fails sporadically with JDK1.8
                 Key: AMBARI-13719
             Project: Ambari
          Issue Type: Bug
            Reporter: Alexander Denissov

Used PHD Ambari 2.1.2 with JDK 1.8 to create a cluster using blueprints. The cluster runs in AWS on r3.large instances (~17G of RAM).

It seems that the stack advisor is not used during a blueprint deployment, so the services take default configurations. For MAPREDUCE2 the map task container memory is set to "512" and the map task JVM options are set to "-Xmx410m".
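As background on those two defaults: the -Xmx heap is typically derived from the container memory limit with a fixed headroom factor (410 is roughly 80% of 512, leaving room for non-heap JVM memory). A minimal sketch of that heuristic, assuming the common ~0.8 ratio rather than Ambari's exact formula:

```python
# Sketch of the common heap-sizing heuristic relating a YARN container's
# memory limit to the JVM heap (-Xmx) run inside it. The 0.8 factor is a
# widely used rule of thumb, not necessarily Ambari's exact calculation.

def heap_opts_for_container(container_mb, heap_fraction=0.8):
    """Return a -Xmx option that leaves headroom for non-heap JVM memory."""
    heap_mb = round(container_mb * heap_fraction)
    return f"-Xmx{heap_mb}m"

print(heap_opts_for_container(512))   # -> -Xmx410m, matching the default above
```

The point of the headroom is that the container limit bounds the whole process (heap, metaspace, thread stacks, native buffers), so -Xmx alone cannot safely equal the container size.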

Service checks are not run by default during blueprint deployment. When a service check REST
API is called, the service check fails with mapper task running out of memory:
2015-11-03 18:53:56,256 FATAL [main] org.apache.hadoop.mapred.YarnChild: Error running child
: java.lang.OutOfMemoryError: Java heap space
       at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.init(
       at org.apache.hadoop.mapred.MapTask.createSortingCollector(
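For reference, a service check of this kind can be triggered through the Ambari REST requests endpoint roughly as follows. The host, credentials, and cluster name "c1" are placeholders, not values from this report:

```shell
# Trigger the MAPREDUCE2 service check via the Ambari REST API.
# AMBARI_HOST, admin:admin, and cluster name "c1" are placeholders.
curl -u admin:admin \
  -H 'X-Requested-By: ambari' \
  -X POST \
  -d '{
        "RequestInfo": {
          "context": "MAPREDUCE2 Service Check",
          "command": "MAPREDUCE2_SERVICE_CHECK"
        },
        "Requests/resource_filters": [
          {"service_name": "MAPREDUCE2"}
        ]
      }' \
  "http://AMBARI_HOST:8080/api/v1/clusters/c1/requests"
```

The call is asynchronous: it returns a request resource whose status can be polled to see whether the check passed or hit the OOM above.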

When connecting to the cluster via UI and re-running the service check, it sometimes succeeds
and sometimes fails again.

This behavior is observed when using JDK 1.8; the same environment, blueprint, and tests work fine with JDK 1.7.

The behavior is observed with JDK 1.8 with the blueprint containing many services (HBase,
Hive, etc). A blueprint with just HDFS/YARN/MR2/ZK seems not to have this issue on JDK 1.8.

Manually increasing the memory parameters (or adding the configurations to the blueprint) seems to resolve the problem.
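As an illustration only, such overrides could be supplied in the blueprint's configurations section. The property names and values below are assumptions based on the standard Hadoop mapred-site settings, not the exact ones used in this report:

```json
{
  "configurations": [
    {
      "mapred-site": {
        "properties": {
          "mapreduce.map.memory.mb": "1024",
          "mapreduce.map.java.opts": "-Xmx819m"
        }
      }
    }
  ]
}
```

Because blueprint deployments bypass the stack advisor (as noted above), these values are taken verbatim rather than being adjusted to the hosts' actual memory.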

However, we should get to the root cause of why using JDK 1.8 causes OOM errors with the default configurations.

This message was sent by Atlassian JIRA
