hive-user mailing list archives

From Prasanth Jayachandran <>
Subject Re: Why direct memory should be smaller than llap size
Date Tue, 14 Mar 2017 04:35:59 GMT
This essentially means you are asking for a container of size 4GB and all of that 4GB is used
for cache. There is no memory for executors.

Ideally you want to set size = Xmx + cache (if an off-heap cache is used) + some headroom space, where:

Xmx - the heap memory used by the executors
Cache - if direct allocation is used, the cache counts towards off-heap memory usage
Headroom - additional off-heap memory used for miscellaneous Java overhead (metaspace, thread stacks (threads * stack size), GC data structures, etc.)
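The sizing rule above can be sketched in a few lines of Python. This is only an illustration of the arithmetic; the function name and parameters are made up here and are not part of any Hive/LLAP API.

```python
# Illustrative sketch of the LLAP container sizing rule; names are
# hypothetical, not a real Hive API.

def required_container_size_mb(xmx_mb, cache_mb, headroom_mb, off_heap_cache=True):
    """Container size needed for an LLAP daemon.

    xmx_mb      -- heap memory for the executors (-Xmx)
    cache_mb    -- LLAP IO cache; counts as off-heap only when direct
                   allocation is used, otherwise it lives inside Xmx
    headroom_mb -- off-heap slack for metaspace, thread stacks, GC
                   data structures, etc.
    """
    if off_heap_cache:
        return xmx_mb + cache_mb + headroom_mb
    # An on-heap cache is carved out of Xmx, so it adds nothing extra.
    return xmx_mb + headroom_mb

# Off-heap cache: 2048m heap + 1536m cache + 512m headroom = 4096m (4g)
print(required_container_size_mb(2048, 1536, 512))
# On-heap cache: 3584m heap (cache inside it) + 512m headroom = 4096m (4g)
print(required_container_size_mb(3584, 1536, 512, off_heap_cache=False))
```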

Say you are using 4GB containers and you want 2 executors with 1GB each,

then you can try the following for off heap cache

--size 4g --Xmx 2g --cache 1536m

for on-heap cache

--size 4g --Xmx 3584m

The remaining ~512MB is left as headroom so that YARN does not kill the container when it reaches
the physical memory usage limit.
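The constraint behind the error in the question below can be sketched the same way: the cache must leave room inside the container for the heap and the headroom. This is a hypothetical illustration of that check, not the actual LlapServiceDriver source.

```python
# Hypothetical sketch of the sizing validation implied by the
# IllegalArgumentException; not the actual Hive LlapServiceDriver code.

def check_llap_sizing(size_mb, cache_mb, xmx_mb):
    """Return the headroom (MB) left in the container, or raise if the
    requested cache/heap cannot fit inside the container size."""
    if cache_mb >= size_mb:
        raise ValueError(
            f"Cache size ({cache_mb}MB) has to be smaller than the "
            f"container sizing ({size_mb}MB)")
    headroom_mb = size_mb - xmx_mb - cache_mb
    if headroom_mb < 0:
        raise ValueError("Xmx + cache exceeds the container size")
    return headroom_mb

# The off-heap example above: 512MB left as headroom.
print(check_llap_sizing(4096, 1536, 2048))
# --size 4g --cache 4g (the failing command) would raise, since the
# cache alone fills the whole container and leaves nothing for the heap.
```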

Reference usage can be found here


On Mar 13, 2017, at 9:22 PM, 邓志华 (Zhihua Deng) <zhihuadeng1@CREDITEASE.CN> wrote:

When I use 'hive --service llap --name llap_service --size 4g --cache 4g' to generate the
startup script for LLAP, an exception is thrown:

java.lang.IllegalArgumentException: Cache size (4.00GB) has to be smaller than the container
sizing (4.00GB)
at org.apache.hadoop.hive.llap.cli.LlapServiceDriver.main(

Is there any reason why this rule must be followed?

Zhihua Deng
