hadoop-common-user mailing list archives

From "Yao, York" <york....@here.com>
Subject Yarn container out of memory when using large memory mapped file
Date Sat, 04 Apr 2015 22:36:14 GMT

I am using Hadoop 2.4. The reducer uses several large memory-mapped files (about 8 GB total).
The reducer itself uses very little memory. To my knowledge, a memory-mapped file (FileChannel.map(readonly))
also uses very little JVM memory, since the mapped pages are managed by the OS rather than the JVM.
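
For reference, the mapping is done roughly like this (the path, class name, and the read at the end are just placeholders, not my actual code):

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MappedLookup {
    public static void main(String[] args) throws Exception {
        // Map a large lookup file read-only. The mapped pages live in the OS
        // page cache, not in the JVM heap, so they are not counted against -Xmx.
        // Each of my files is about 1.5 GB, which fits in a single
        // MappedByteBuffer (limited to 2 GB per mapping).
        try (RandomAccessFile raf = new RandomAccessFile("/path/to/lookup.dat", "r");
             FileChannel channel = raf.getChannel()) {
            MappedByteBuffer buffer =
                    channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
            // Reads touch pages on demand; paging them in and out is handled by the OS.
            byte b = buffer.get(0);
            System.out.println("first byte = " + b);
        }
    }
}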

I got an error similar to this:

Container [pid=26783,containerID=container_1389136889967_0009_01_000002] is running beyond physical memory limits. Current usage: 4.2 GB of 4 GB physical memory used; 5.2 GB of 8.4 GB virtual memory used. Killing container.

Here were my settings:



So I adjusted the parameters to this, and it worked:



I further adjusted the parameters and got it to work like this:



My question is: why does the YARN container need about 8 GB more memory than the JVM size?
The culprit seems to be the large Java memory-mapped files I use (each about 1.5 GB, summing
to about 8 GB). Aren't memory-mapped files managed by the OS, and aren't they supposed to be
shareable by multiple processes (e.g. reducers)?
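
For context, these are the kind of job-level parameters I am adjusting (the values below are placeholders to show the relationship, not my actual settings). The container limit (mapreduce.reduce.memory.mb) has to cover the JVM heap (-Xmx in mapreduce.reduce.java.opts) plus whatever else is resident in the process, and the 8.4 GB virtual limit in the log above is just the 4 GB container size times the default yarn.nodemanager.vmem-pmem-ratio of 2.1:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class JobMemorySettings {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Heap for the reducer JVM itself; the reducer code needs very little.
        conf.set("mapreduce.reduce.java.opts", "-Xmx2048m");
        // Physical memory limit of the YARN container; apparently it has to
        // leave room for the ~8 GB of memory-mapped files on top of the heap.
        conf.setInt("mapreduce.reduce.memory.mb", 10240);
        Job job = Job.getInstance(conf, "reducer-with-mmap");
        // ... set mapper/reducer classes, input/output, then job.waitForCompletion(true).
    }
}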


