hive-user mailing list archives

From Prasanth Jayachandran <pjayachand...@hortonworks.com>
Subject Re: Why direct memory should be smaller than llap size
Date Tue, 14 Mar 2017 16:24:05 GMT
The “size” here means the container size. You are requesting a 12GB container, but the max
container size in your cluster is configured to 8GB.
Xmx + cache should be less than size, which in turn must be less than the max container size.

Also, set --executors/--iothreads to a value <= max vcores.

--size 8g --xmx 4g --cache 3200m --executors 4 --iothreads 4

This will request an 8GB container, of which 4GB of heap is shared by the 4 executor threads
and 3200MB is used for the off-heap cache.
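As a sanity check on the arithmetic above, the headroom left in that 8GB container can be
computed as follows (a sketch of the sizing rule from this thread, not the exact formula the
LLAP CLI applies internally):

```shell
# Sketch: headroom left by the recommended 8GB configuration.
# Values mirror the flags above; treating headroom as simply
# size - xmx - cache is an assumption, not an official LLAP formula.
SIZE_MB=8192    # --size 8g   (container size requested from YARN)
XMX_MB=4096     # --xmx 4g    (heap shared by the 4 executors)
CACHE_MB=3200   # --cache 3200m (off-heap cache)
HEADROOM_MB=$((SIZE_MB - XMX_MB - CACHE_MB))
echo "headroom: ${HEADROOM_MB} MB"   # prints "headroom: 896 MB"
```

That ~900MB of slack covers metaspace, thread stacks, and other off-heap JVM overhead so
YARN's physical-memory check is not tripped.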

Thanks
Prasanth

On Mar 14, 2017, at 1:01 AM, 邓志华 <zhihuadeng1@CREDITEASE.CN<mailto:zhihuadeng1@CREDITEASE.CN>>
wrote:

Thanks @pjayachandran, I tried adding xmx to specify the executors' memory:

hive --service llap --name llap_service --instances 16  --size 12g --xmx 6g --cache 5120m
--executors 24 --iothreads 24

The startup script is now generated successfully, but the daemon fails to start due to:

AMRMClientAsync.onError() received org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException:
Invalid resource request, requested memory < 0, or requested memory > max configured,
requestedMemory=12288, maxMemory=8192
at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:268)
at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:228)
at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndvalidateRequest(SchedulerUtils.java:244)
at org.apache.hadoop.yarn.server.resourcemanager.RMServerUtils.normalizeAndValidateRequests(RMServerUtils.java:106)
at org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.allocate(ApplicationMasterService.java:502)
at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationMasterProtocolPBServiceImpl.allocate(ApplicationMasterProtocolPBServiceImpl.java:60)
at org.apache.hadoop.yarn.proto.ApplicationMasterProtocol$ApplicationMasterProtocolService$2.callBlockingMethod(ApplicationMasterProtocol.java:99)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)

In this example, I want to use 6g for the container's heap memory and 5g for the off-heap data
cache, which may add up to 12g in total. The example shows that the ```size``` takes precedence
over the ```xmx```. If I decrease the size to 6g, the llap daemon starts with a 6g heap, which
I can see from the GC log in the NodeManager web UI.

The Hive version is Apache 2.1.1.
So am I missing anything? I am confused.

Thanks
Zhihua, Deng

On Mar 14, 2017, at 12:35 PM, Prasanth Jayachandran <pjayachandran@hortonworks.com<mailto:pjayachandran@hortonworks.com>>
wrote:

This essentially means you are asking for a 4GB container and all of that 4GB is used for the
cache, leaving no memory for the executors.

Ideally you want to set size = Xmx + cache (if an off-heap cache is used) + some headroom.

Xmx - the heap memory used by the executors
Cache - if direct allocation is used, this counts towards off-heap memory usage
Headroom - additional off-heap memory used for miscellaneous Java overhead (metaspace,
threads * stack size, GC data structures, etc.)

Say you are using 4GB containers and you want 2 executors with 1GB each,

then you can try the following for an off-heap cache:

--size 4g --xmx 2g --cache 1536m

and for an on-heap cache:

--size 4g --xmx 3584m

The remaining ~500MB is left as headroom so that YARN does not kill the container when it
reaches its physical memory usage limit.
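The same check applies to both 4GB examples above. The following sketch (assumed arithmetic
following the sizing rule in this thread, not code from the LLAP CLI) confirms that each
configuration leaves roughly 512MB of headroom:

```shell
# Sketch: headroom for the two 4GB container examples above.
SIZE_MB=4096   # --size 4g

# Off-heap cache example: --size 4g --xmx 2g --cache 1536m
OFFHEAP_HEADROOM=$((SIZE_MB - 2048 - 1536))
echo "off-heap example headroom: ${OFFHEAP_HEADROOM} MB"   # prints "512 MB"

# On-heap cache example: --size 4g --xmx 3584m
# (the cache lives inside the heap, so only xmx is subtracted)
ONHEAP_HEADROOM=$((SIZE_MB - 3584))
echo "on-heap example headroom: ${ONHEAP_HEADROOM} MB"     # prints "512 MB"
```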

Reference usage can be found here: https://github.com/t3rmin4t0r/tez-autobuild/blob/llap/slider-gen.sh

Thanks
Prasanth

On Mar 13, 2017, at 9:22 PM, 邓志华 <zhihuadeng1@CREDITEASE.CN<mailto:zhihuadeng1@CREDITEASE.CN>>
wrote:

When I use 'hive --service llap --name llap_service --size 4g --cache 4g' to generate the
startup script for LLAP, an exception is thrown:

java.lang.IllegalArgumentException: Cache size (4.00GB) has to be smaller than the container
sizing (4.00GB)
at com.google.common.base.Preconditions.checkArgument(Preconditions.java:92)
at org.apache.hadoop.hive.llap.cli.LlapServiceDriver.run(LlapServiceDriver.java:207)
at org.apache.hadoop.hive.llap.cli.LlapServiceDriver.main(LlapServiceDriver.java:104)

Is there a reason why this rule must be followed?

Thanks
Zhihua Deng

