flink-user mailing list archives

From <rimin...@sina.cn>
Subject Re: JVM Non Heap Memory
Date Tue, 29 Nov 2016 14:54:42 GMT
I have the same problem, but I submit the Flink job to YARN.
I submit the job from computer 22, and the job runs successfully. The JobManager
is on computer 79 and the TaskManager on computer 69, so three different computers.
However, on computer 22 the process with pid 3463, which is the client that
submitted the job to YARN, occupies 2.3 GB of memory, 15% of the total.
The command is: ./flink run -m yarn-cluster -yn 1 -ys 1 -yjm 1024 -ytm 1024 ....
Why does computer 22 occupy so much memory, when the job is running on computers 79 and 69?
What would be the possible causes of such behavior?
Best Regards,
----- Original Message -----
From: Daniel Santos <dsantos@cryptolab.net>
Subject: JVM Non Heap Memory
Date: Nov 29, 2016, 22:26

Is it common to have high Non-Heap usage in the JVM?
I am running Flink in a standalone cluster inside Docker, with each
container capped at 6 GB of memory.
I have been struggling to keep memory usage in check.
The non-heap usage grows without bound. It starts at just 100 MB and
after a day it reaches 1.3 GB.
Eventually it reaches 2 GB, and then the container is killed
because it has hit the memory limit.
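One way to see where that growth is going is to query the JVM's own memory pools via the standard `java.lang.management` API; a minimal standalone sketch (the class name is mine, not part of Flink):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;
import java.lang.management.MemoryUsage;

public class NonHeapProbe {
    // Total non-heap usage in bytes, as reported by the JVM itself.
    public static long nonHeapUsedBytes() {
        MemoryUsage nonHeap = ManagementFactory.getMemoryMXBean().getNonHeapMemoryUsage();
        return nonHeap.getUsed();
    }

    public static void main(String[] args) {
        System.out.printf("non-heap used: %d MB%n", nonHeapUsedBytes() / (1024 * 1024));
        // Break the total down by pool: Metaspace, Compressed Class Space and
        // Code Cache are the usual suspects when non-heap keeps climbing.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getType() == MemoryType.NON_HEAP) {
                System.out.printf("%-30s %d MB%n", pool.getName(),
                        pool.getUsage().getUsed() / (1024 * 1024));
            }
        }
    }
}
```

Note that this only covers memory the JVM accounts for; native allocations made by libraries outside the JVM's bookkeeping would not show up here.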
My configuration for each Flink TaskManager is the following:
----------- flink-conf.yaml --------------
taskmanager.heap.mb: 3072
taskmanager.numberOfTaskSlots: 8
taskmanager.memory.preallocate: false
taskmanager.network.numberOfBuffers: 12500
taskmanager.memory.off-heap: false
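As a side note on the config above, the network buffer pool alone pins a sizeable chunk of memory; a back-of-envelope sketch, assuming Flink's default 32 KiB memory segment size (an assumption, so check `taskmanager.memory.segment-size` on your cluster):

```java
public class BufferPoolSize {
    // Bytes reserved by the network buffer pool:
    // number of buffers times the memory segment size.
    public static long bufferPoolBytes(int numBuffers, int segmentSizeBytes) {
        return (long) numBuffers * segmentSizeBytes;
    }

    public static void main(String[] args) {
        int numBuffers = 12500;       // taskmanager.network.numberOfBuffers
        int segmentSize = 32 * 1024;  // assumed default segment size, 32 KiB
        System.out.printf("network buffer pool: %.0f MiB%n",
                bufferPoolBytes(numBuffers, segmentSize) / (1024.0 * 1024.0));
        // prints "network buffer pool: 391 MiB"
    }
}
```

With `taskmanager.memory.off-heap: false` those buffers are allocated on the heap, so they count against `taskmanager.heap.mb` rather than the non-heap figure, but they still reduce the headroom before the container's 6 GB cap.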
What would be the possible causes of such behavior ?
Best Regards,
Daniel Santos