hadoop-hdfs-user mailing list archives

From 麦树荣 <shurong....@qunar.com>
Subject Re: Yarn container out of memory when using large memory mapped file
Date Tue, 07 Apr 2015 04:01:22 GMT
mapreduce.reduce.memory.mb limits the physical memory of the whole container, not just the JVM heap.
Your memory-mapped files (about 8 GB total) exceed that 4 GB limit (mapreduce.reduce.memory.mb=4096),
which is why the container was killed.
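The distinction can be sketched as a mapred-site.xml fragment. Only mapreduce.reduce.memory.mb=4096 comes from this thread; the heap value below is an illustrative assumption, not the poster's actual setting:

```xml
<!-- Illustrative sketch; only memory.mb=4096 is taken from this thread. -->
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <!-- YARN container limit: total physical memory (RSS) of the process,
       including heap, metaspace, native buffers, and touched mmap pages. -->
  <value>4096</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <!-- JVM heap only (assumed value); everything outside the heap still
       counts toward the 4096 MB container limit above. -->
  <value>-Xmx3276m</value>
</property>
```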

From: Yao, York [mailto:york.yao@here.com]
Sent: April 5, 2015 6:36
To: user@hadoop.apache.org
Subject: Yarn container out of memory when using large memory mapped file


I am using Hadoop 2.4. The reducer uses several large memory-mapped files (about 8 GB total).
The reducer itself uses very little memory. To my knowledge, a memory-mapped file (FileChannel.map(READ_ONLY))
also uses little heap memory, since its pages are managed by the OS rather than the JVM.
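The kind of read-only mapping described above can be sketched as follows; the tiny temp file stands in for the poster's 1.5 GB lookup files. Note that although the mapped pages live outside the JVM heap, once touched they still count toward the process's resident set size, which is what YARN's physical-memory check measures:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MmapDemo {
    public static void main(String[] args) throws IOException {
        // Small temp file standing in for a large read-only lookup file.
        Path p = Files.createTempFile("mmap-demo", ".bin");
        Files.write(p, new byte[]{1, 2, 3, 4});

        try (FileChannel ch = FileChannel.open(p, StandardOpenOption.READ)) {
            // Read-only mapping: pages are served from the OS page cache,
            // not the JVM heap, but touched pages count toward process RSS.
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
            int sum = 0;
            while (buf.hasRemaining()) {
                sum += buf.get();
            }
            System.out.println(sum); // prints 10
        }
        Files.delete(p);
    }
}
```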

I got error similar to this: Container [pid=26783,containerID=container_1389136889967_0009_01_000002]
is running beyond physical memory limits. Current usage: 4.2 GB of 4 GB physical memory used;
5.2 GB of 8.4 GB virtual memory used. Killing container
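As a side note, the 8.4 GB virtual-memory ceiling in the error message is consistent with the 4 GB physical limit multiplied by yarn.nodemanager.vmem-pmem-ratio, whose default is 2.1:

```python
# The virtual-memory cap YARN enforces is pmem limit * vmem-pmem ratio.
pmem_gb = 4.0          # container physical memory limit from the error message
vmem_pmem_ratio = 2.1  # default for yarn.nodemanager.vmem-pmem-ratio
print(pmem_gb * vmem_pmem_ratio)  # 8.4, matching "8.4 GB virtual memory used"
```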

Here were my settings:



So I adjusted the parameters to this and it worked:



I further adjusted the parameters and got it working like this:



My question is: why does the YARN container need about 8 GB more memory than the JVM heap size?
The culprit seems to be the large Java memory-mapped files I used (each about 1.5 GB, summing
to about 8 GB). Aren't memory-mapped files managed by the OS, and aren't they supposed to be shared
by multiple processes (e.g. reducers)?


