hive-user mailing list archives

From Prasanth Jayachandran <>
Subject Re: Why direct memory should be smaller than llap size
Date Tue, 14 Mar 2017 16:24:05 GMT
The “size” here means the container size. You are requesting a 12GB container, but the max container size in your cluster is configured to 8GB.
Xmx + cache should be less than size, which in turn should be less than the max container size.

Also, set --executors/--iothreads to a value <= max vcores.

--size 8g --xmx 4g --cache 3200m --executors 4 --iothreads 4

This will request an 8GB container, out of which 4GB is shared by 4 executor threads and 3200MB is used for the off-heap cache.
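The sizing rule above can be sanity-checked with plain shell arithmetic. This is only a sketch, not part of the LLAP CLI; the values mirror the example flags, and the 8GB max-container value is an assumption taken from the error reported earlier in this thread:

```shell
# Sketch: verify the sizing rule (xmx + cache < size <= max container size).
size_mb=8192        # --size 8g
xmx_mb=4096         # --xmx 4g
cache_mb=3200       # --cache 3200m
max_container_mb=8192   # assumed cluster cap (yarn.scheduler.maximum-allocation-mb)

if [ $((xmx_mb + cache_mb)) -lt "$size_mb" ] && [ "$size_mb" -le "$max_container_mb" ]; then
  echo "sizing OK, headroom: $((size_mb - xmx_mb - cache_mb)) MB"
else
  echo "sizing INVALID"
fi
```

With these numbers the check passes and leaves 896MB of headroom; requesting --size 12g against the same 8GB cap would fail the second test, which is exactly the InvalidResourceRequestException YARN raises below.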


On Mar 14, 2017, at 1:01 AM, 邓志华 (Zhihua Deng) <zhihuadeng1@CREDITEASE.CN> wrote:

Thanks @pjayachandran, I tried adding xmx to specify the executor memory.

hive --service llap --name llap_service --instances 16  --size 12g --xmx 6g --cache 5120m
--executors 24 --iothreads 24

The startup script now generates fine, but the daemon fails to start due to:

AMRMClientAsync.onError() received org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException:
Invalid resource request, requested memory < 0, or requested memory > max configured,
requestedMemory=12288, maxMemory=8192
at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(
at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(
at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndvalidateRequest(
at org.apache.hadoop.yarn.server.resourcemanager.RMServerUtils.normalizeAndValidateRequests(
at org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.allocate(
at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationMasterProtocolPBServiceImpl.allocate(
at org.apache.hadoop.yarn.proto.ApplicationMasterProtocol$ApplicationMasterProtocolService$2.callBlockingMethod(
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$

In this example, I want to use 6GB for the container's running memory and 5GB for the off-heap data cache, which may come to 12GB in total. The example shows that `size` takes precedence over `xmx`.
If I decrease the size to 6g, the LLAP daemon starts with 6GB of heap memory, which I can see from the GC log in the NodeManager web UI.

The Hive version: Apache 2.1.1.
So am I missing anything? I'm confused.

Zhihua, Deng

On Mar 14, 2017, at 12:35 PM, Prasanth Jayachandran <<>> wrote:

This essentially means you are asking for a container of size 4GB, and all of that 4GB is used for the cache. There is no memory left for the executors.

Ideally you want to set size = Xmx + cache (if the off-heap cache is used) + some headroom.

Xmx - the heap memory used by the executors
Cache - if direct allocation is used, this counts towards off-heap memory usage
Headroom - also off-heap; memory used for miscellaneous Java overhead (metaspace, threads * stack size, GC data structures, etc.)

Say you are using 4GB containers and you want 2 executors with 1GB each,

then you can try the following for an off-heap cache

--size 4g --xmx 2g --cache 1536m

for on-heap cache

--size 4g --xmx 3584m

The remaining ~500MB is left as headroom so that YARN does not kill the container when it reaches the physical memory usage limit.
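The arithmetic behind both 4GB examples can be checked directly. This is only a sketch; the MB values are the binary equivalents of the flags quoted above:

```shell
size_mb=4096                  # --size 4g
# Off-heap cache case: --size 4g --xmx 2g --cache 1536m
echo "off-heap case headroom: $((size_mb - 2048 - 1536)) MB"
# On-heap cache case: --size 4g --xmx 3584m (the cache counts inside the heap)
echo "on-heap case headroom: $((size_mb - 3584)) MB"
```

Both layouts leave 512MB (~500MB) of the container unallocated for the off-heap headroom described above.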

Reference usage can be found here


On Mar 13, 2017, at 9:22 PM, 邓志华 (Zhihua Deng) <zhihuadeng1@CREDITEASE.CN> wrote:

When I use 'hive --service llap --name llap_service --size 4g --cache 4g' to generate the startup script for LLAP, an exception is thrown:

java.lang.IllegalArgumentException: Cache size (4.00GB) has to be smaller than the container
sizing (4.00GB)
at org.apache.hadoop.hive.llap.cli.LlapServiceDriver.main(

Is there any reason why this rule should be followed?

Zhihua Deng
