hive-user mailing list archives

From Prasanth Jayachandran <pjayachand...@hortonworks.com>
Subject Re: Why direct memory should be smaller than llap size
Date Tue, 14 Mar 2017 04:35:59 GMT
This essentially means you are asking for a container of size 4GB with all of that 4GB used
for cache, leaving no memory for the executors.

Ideally you want to set size = Xmx + cache (if an off-heap cache is used) + some headroom.

Xmx - the heap memory used by the executors
Cache - if direct allocation is used, this counts toward off-heap memory usage
Headroom - additional off-heap memory used for miscellaneous Java overhead (metaspace,
threads * stack size, GC data structures, etc.)
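To make the arithmetic explicit, here is a small Python sketch of the sizing rule above. The function name and the MB-based units are mine for illustration, not part of the LLAP CLI:

```python
def llap_container_size(xmx_mb, cache_mb, headroom_mb, off_heap_cache=True):
    """Container --size needed for a given heap, cache and headroom (all in MB).

    With an off-heap cache, the cache counts against the container on top of
    the heap; with an on-heap cache, it already lives inside Xmx.
    """
    if off_heap_cache:
        return xmx_mb + cache_mb + headroom_mb
    return xmx_mb + headroom_mb  # cache is carved out of Xmx itself

# Off-heap example from this mail: 2g heap + 1536m cache + 512m headroom
print(llap_container_size(2048, 1536, 512))  # 4096 (fits a 4g container)

# On-heap example: 3584m heap (cache inside it) + 512m headroom
print(llap_container_size(3584, 0, 512, off_heap_cache=False))  # 4096
```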

Say you are using 4GB containers and you want 2 executors with 1GB each;

then you can try the following for off heap cache

--size 4g --xmx 2g --cache 1536m

for on-heap cache

--size 4g --xmx 3584m

The remaining ~512MB is left as headroom so that YARN does not kill the container when it
hits the physical memory usage limit.
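The rule the CLI enforces can be sketched the same way. This is a rough Python mimic of the precondition behind the exception quoted below, not the actual LlapServiceDriver code:

```python
def check_llap_sizing(size_mb, cache_mb):
    """Reject configurations where the cache fills the whole container,
    mirroring the Preconditions check in the stack trace below.

    Returns the memory left over for the executor heap and headroom.
    """
    if cache_mb >= size_mb:
        raise ValueError(
            f"Cache size ({cache_mb}MB) has to be smaller than the "
            f"container sizing ({size_mb}MB)")
    return size_mb - cache_mb

# --size 4g --cache 1536m leaves 2560MB for Xmx plus headroom
print(check_llap_sizing(4096, 1536))  # 2560

# --size 4g --cache 4g fails, just like the reported exception
# check_llap_sizing(4096, 4096)  -> ValueError
```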

Reference usage can be found here https://github.com/t3rmin4t0r/tez-autobuild/blob/llap/slider-gen.sh

Thanks
Prasanth

On Mar 13, 2017, at 9:22 PM, Zhihua Deng <zhihuadeng1@CREDITEASE.CN> wrote:

When I use 'hive --service llap --name llap_service --size 4g --cache 4g' to generate the
startup script for LLAP, an exception is thrown:

java.lang.IllegalArgumentException: Cache size (4.00GB) has to be smaller than the container
sizing (4.00GB)
at com.google.common.base.Preconditions.checkArgument(Preconditions.java:92)
at org.apache.hadoop.hive.llap.cli.LlapServiceDriver.run(LlapServiceDriver.java:207)
at org.apache.hadoop.hive.llap.cli.LlapServiceDriver.main(LlapServiceDriver.java:104)

Is there a reason why this rule must be followed?

Thanks
Zhihua Deng


