hadoop-hdfs-user mailing list archives

From "S.L" <simpleliving...@gmail.com>
Subject Re: Unable to change the virtual memory to be more than the default 2.1 GB
Date Sun, 05 Jan 2014 17:37:12 GMT
Hi German, Thanks for your reply!

a) Yes, setting the property yarn.nodemanager.vmem-check-enabled to false seems
to have avoided the problem.

b) I would want to set the pmem/vmem ratio to a higher value and keep the
virtual memory within certain limits, but changing this value is not
having any effect on Hadoop 2.2 YARN.
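For reference, this is the fragment I have in yarn-site.xml (assuming
yarn.nodemanager.vmem-pmem-ratio is the right property; its default of 2.1
would match the 2.1 GB limit in the error below):

```xml
<property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>10</value>
</property>
```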

c) Why would the virtual memory increase while the physical memory stays the
same? What might cause this to happen in YARN?
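As a sanity check on (c), the per-process VMEM_USAGE(BYTES) values in the
process-tree dump below really do add up to the reported 14.5 GB, and the
2.1 GB cap is just the 1 GB physical allocation times the default
vmem-pmem ratio of 2.1 (a quick sketch; the byte counts are copied from
the error dump in the quoted message):

```python
# Virtual memory of each process in the container's tree, in bytes,
# taken from the VMEM_USAGE(BYTES) column of the dump below.
vmem_bytes = [
    108650496,    # bash wrapper (pid 12013)
    820924416,    # java YarnChild (pid 12018)
    1641000960,   # phantomjs (pid 12077)
    1615687680,   # phantomjs (pid 12075)
    1641000960,   # phantomjs (pid 12074)
    1641000960,   # phantomjs (pid 12073)
    1615687680,   # phantomjs (pid 12090)
    1641000960,   # phantomjs (pid 12072)
    1615687680,   # phantomjs (pid 12091)
    1615687680,   # phantomjs (pid 12078)
    1642020864,   # phantomjs (pid 12079)
]

# Sum of the tree's virtual address space; this is what the NodeManager
# compares against the limit.
total_gb = sum(vmem_bytes) / 2**30
print(f"total vmem: {total_gb:.1f} GB")   # matches the "14.5 GB ... used" in the log

# The enforced limit is (physical allocation) * (vmem-pmem ratio):
limit_gb = 1 * 2.1                        # 1 GB container * default ratio 2.1
print(f"vmem limit: {limit_gb:.1f} GB")   # matches "of 2.1 GB virtual memory"
```

So the 14.5 GB is dominated by the eight phantomjs children, each of which
reserves roughly 1.5 GB of address space while touching far less physical
memory, which is why pmem stays low while vmem blows past the cap.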

Thanks.


On Thu, Jan 2, 2014 at 11:18 AM, German Florez-Larrahondo <
german.fl@samsung.com> wrote:

> A few things you can try
>
>
>
> a)      If you don’t care about virtual memory controls at all you can
> bypass it by doing the following change in the XML and restarting YARN.
>  Only you know if this is OK for the application you are trying (IMO the
> virtual memory being used is huge!)
>
>     <property>
>
>         <name>yarn.nodemanager.vmem-check-enabled</name>
>
>         <value>false</value>
>
>     </property>
>
> b)      If you still want to control the pmem/vmem ratio, did you restart
> YARN after making the change in the XML file?
>
>
>
>
>
> Regards./g
>
>
>
> *From:* S.L [mailto:simpleliving016@gmail.com]
> *Sent:* Wednesday, January 01, 2014 9:51 PM
> *To:* user@hadoop.apache.org
> *Subject:* Unable to change the virtual memory to be more than the
> default 2.1 GB
>
>
>
> Hello Folks,
>
> I am running hadoop 2.2 in a pseudo-distributed mode on a laptop with 8GB
> RAM.
>
> Whenever I submit a job, I get an error saying that the virtual memory
> usage was exceeded, like the one below.
>
> I have changed the ratio in yarn-site.xml to 10; however, the virtual
> memory is not being increased beyond 2.1 GB, as can be seen in the error
> message below, and the container is being killed.
>
> Can someone please let me know if there is any other setting that needs
> to be changed? Thanks in advance!
>
> *Error Message :*
>
> INFO mapreduce.Job: Task Id : attempt_1388632710048_0009_m_000000_2,
> Status : FAILED
> Container [pid=12013,containerID=container_1388632710048_0009_01_000004]
> is running beyond virtual memory limits. Current usage: 544.9 MB of 1 GB
> physical memory used; 14.5 GB of 2.1 GB virtual memory used. Killing
> container.
> Dump of the process-tree for container_1388632710048_0009_01_000004 :
>     |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS)
> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
>     |- 12077 12018 12013 12013 (phantomjs) 16 2 1641000960 6728
> /usr/local/bin/phantomjs --webdriver=15358
> --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log
>
>     |- 12013 882 12013 12013 (bash) 1 0 108650496 305 /bin/bash -c
> /usr/java/jdk1.7.0_25/bin/java -Djava.net.preferIPv4Stack=true
> -Dhadoop.metrics.log.level=WARN  -Xmx200m
> -Djava.io.tmpdir=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/tmp
> -Dlog4j.configuration=container-log4j.properties
> -Dyarn.app.container.log.dir=/home/general/hadoop-2.2.0/logs/userlogs/application_1388632710048_0009/container_1388632710048_0009_01_000004
> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
> org.apache.hadoop.mapred.YarnChild 127.0.0.1 56498
> attempt_1388632710048_0009_m_000000_2 4
> 1>/home/general/hadoop-2.2.0/logs/userlogs/application_1388632710048_0009/container_1388632710048_0009_01_000004/stdout
> 2>/home/general/hadoop-2.2.0/logs/userlogs/application_1388632710048_0009/container_1388632710048_0009_01_000004/stderr
>
>     |- 12075 12018 12013 12013 (phantomjs) 17 1 1615687680 6539
> /usr/local/bin/phantomjs --webdriver=29062
> --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log
>
>     |- 12074 12018 12013 12013 (phantomjs) 16 2 1641000960 6727
> /usr/local/bin/phantomjs --webdriver=5958
> --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log
>
>     |- 12073 12018 12013 12013 (phantomjs) 17 2 1641000960 6732
> /usr/local/bin/phantomjs --webdriver=31836
> --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log
>
>     |- 12090 12018 12013 12013 (phantomjs) 16 2 1615687680 6538
> /usr/local/bin/phantomjs --webdriver=24519
> --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log
>
>     |- 12072 12018 12013 12013 (phantomjs) 16 1 1641000960 6216
> /usr/local/bin/phantomjs --webdriver=10175
> --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log
>
>     |- 12091 12018 12013 12013 (phantomjs) 17 1 1615687680 6036
> /usr/local/bin/phantomjs --webdriver=5043
> --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log
>
>     |- 12018 12013 12013 12013 (java) 996 41 820924416 79595
> /usr/java/jdk1.7.0_25/bin/java -Djava.net.preferIPv4Stack=true
> -Dhadoop.metrics.log.level=WARN -Xmx200m
> -Djava.io.tmpdir=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/tmp
> -Dlog4j.configuration=container-log4j.properties
> -Dyarn.app.container.log.dir=/home/general/hadoop-2.2.0/logs/userlogs/application_1388632710048_0009/container_1388632710048_0009_01_000004
> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
> org.apache.hadoop.mapred.YarnChild 127.0.0.1 56498
> attempt_1388632710048_0009_m_000000_2 4
>     |- 12078 12018 12013 12013 (phantomjs) 16 3 1615687680 6545
> /usr/local/bin/phantomjs --webdriver=12650
> --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log
>
>     |- 12079 12018 12013 12013 (phantomjs) 17 2 1642020864 7542
> /usr/local/bin/phantomjs --webdriver=18444
> --webdriver-logfile=/tmp/hadoop-general/nm-local-dir/usercache/general/appcache/application_1388632710048_0009/container_1388632710048_0009_01_000004/phantomjsdriver.log
>
>
> Container killed on request. Exit code is 143
>
