apex-users mailing list archives

From Jim <...@facility.supplies>
Subject RE: AWS EMR: Container is running beyond virtual memory limits
Date Fri, 11 Mar 2016 19:21:08 GMT
I was under the impression that with EMR 4.x, we had to create separate config files,
place them on S3, and then reference them when we create the cluster, as described in
detail in this first article:

http://docs.aws.amazon.com/ElasticMapReduce/latest/ReleaseGuide/emr-release-differences.html

This next article goes into more detail.

http://docs.aws.amazon.com/ElasticMapReduce/latest/ReleaseGuide/emr-configure-apps.html

But this is just based on reading the documentation; I have been unable to get the
custom configs working yet.
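
For reference, the pattern those articles describe looks roughly like this; the bucket
name, release label, instance settings, and property values below are just placeholders
I pieced together from the docs, not something I have verified end to end:

  configurations.json (uploaded to S3):

  [
    {
      "Classification": "yarn-site",
      "Properties": {
        "yarn.nodemanager.vmem-check-enabled": "false",
        "yarn.nodemanager.vmem-pmem-ratio": "50"
      }
    }
  ]

  Then, when creating the cluster:

  aws emr create-cluster \
    --release-label emr-4.3.0 \
    --applications Name=Hadoop \
    --instance-type m3.xlarge --instance-count 3 \
    --use-default-roles \
    --configurations https://s3.amazonaws.com/my-bucket/emr/configurations.json

I believe --configurations also accepts a local file (file://configurations.json) if you
don't want to go through S3.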

Thanks,

Jim


From: Aniruddha Thombare [mailto:aniruddha@datatorrent.com]
Sent: Friday, March 11, 2016 9:16 AM
To: users@apex.incubator.apache.org
Subject: Re: AWS EMR: Container is running beyond virtual memory limits


Hi,

It seems that the above-mentioned configurations didn't take effect.
Those changes were made in:
/etc/hadoop/conf/yarn-site.xml
and
mapred-site.xml

@Sandeep, even pidemo didn't run.
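
A quick way to check whether YARN actually picked those values up (assuming the default
ResourceManager web port 8088; replace the host with your master node):

  # effective configuration as the ResourceManager sees it
  curl -s http://<master-node>:8088/conf | grep -A2 vmem

  # what is actually on disk on the node that was edited
  grep -A1 vmem /etc/hadoop/conf/yarn-site.xml

If the new values don't show up, note that yarn-site.xml is only read at daemon startup,
so the ResourceManager and the NodeManagers on the core nodes would need a restart after
editing the file.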

On Fri, 11 Mar 2016 8:34 pm Pradeep A. Dalvi <prad@apache.org> wrote:
We are facing the following error message while starting any containers on AWS EMR.


Container [pid=8107,containerID=container_1457702160744_0001_01_000007] is running beyond
virtual memory limits. Current usage: 186.1 MB of 256 MB physical memory used; 2.0 GB of 1.3
GB virtual memory used. Killing container.

Dump of the process-tree for container_1457702160744_0001_01_000007 :

     |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES)
RSSMEM_USAGE(PAGES) FULL_CMD_LINE

     |- 8222 8107 8107 8107 (java) 589 62 2041503744 46944 /usr/lib/jvm/java-openjdk/bin/java
-Xmx234881024 -Ddt.attr.APPLICATION_PATH=hdfs://ip-172-31-9-174.ec2.internal:8020/user/hadoop/datatorrent/apps/application_1457702160744_0001
-Djava.io.tmpdir=/mnt1/yarn/usercache/hadoop/appcache/application_1457702160744_0001/container_1457702160744_0001_01_000007/tmp
-Ddt.cid=container_1457702160744_0001_01_000007 -Dhadoop.root.logger=INFO,RFA -Dhadoop.log.dir=/var/log/hadoop-yarn/containers/application_1457702160744_0001/container_1457702160744_0001_01_000007
com.datatorrent.stram.engine.StreamingContainer

     |- 8107 8105 8107 8107 (bash) 1 5 115806208 705 /bin/bash -c /usr/lib/jvm/java-openjdk/bin/java
 -Xmx234881024  -Ddt.attr.APPLICATION_PATH=hdfs://ip-172-31-9-174.ec2.internal:8020/user/hadoop/datatorrent/apps/application_1457702160744_0001
-Djava.io.tmpdir=/mnt1/yarn/usercache/hadoop/appcache/application_1457702160744_0001/container_1457702160744_0001_01_000007/tmp
-Ddt.cid=container_1457702160744_0001_01_000007 -Dhadoop.root.logger=INFO,RFA -Dhadoop.log.dir=/var/log/hadoop-yarn/containers/application_1457702160744_0001/container_1457702160744_0001_01_000007
com.datatorrent.stram.engine.StreamingContainer 1>/var/log/hadoop-yarn/containers/application_1457702160744_0001/container_1457702160744_0001_01_000007/stdout
2>/var/log/hadoop-yarn/containers/application_1457702160744_0001/container_1457702160744_0001_01_000007/stderr



Container killed on request. Exit code is 143

Container exited with a non-zero exit code 143

We had 1 m3.xlarge MASTER & 2 m3.xlarge CORE instances provisioned. We have also tried
m4.4xlarge instances. EMR task configurations can be found at http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/TaskConfiguration_H2.html

We tried changing the following YARN configurations; however, they did not seem to help much.

  <property><name>yarn.nodemanager.resource.memory-mb</name><value>12288</value></property>
  <property><name>yarn.scheduler.maximum-allocation-mb</name><value>4096</value></property>
  <property><name>yarn.nodemanager.vmem-check-enabled</name><value>false</value></property>
  <property><name>yarn.nodemanager.vmem-pmem-ratio</name><value>50</value></property>
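
If I am reading the limit correctly, the 1.3 GB virtual cap in the error is roughly the
256 MB physical allocation multiplied by the vmem-pmem ratio in effect (256 MB x ~5
≈ 1.3 GB), which is why we tried raising the ratio and disabling the vmem check
altogether.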


Thanks,
--
Pradeep A. Dalvi