flink-user mailing list archives

From Robert Metzger <rmetz...@apache.org>
Subject Re: flink-job-in-yarn,has max memory
Date Mon, 05 Dec 2016 11:35:21 GMT
Hi,

The TaskManager reports a total memory usage of 3 GB. That's fine, given
that you requested containers of size 4GB. Flink doesn't allocate all the
memory assigned to the container to the heap.
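
For illustration, Flink's YARN client reserves a safety cutoff of the container for off-heap and native memory and hands only the remainder to the JVM heap. The following sketch assumes the 1.1.x-era defaults (a cutoff ratio of 0.25 with a fixed minimum, configurable via `yarn.heap-cutoff-ratio`); the exact parameter names and defaults vary by version:

```python
def yarn_container_heap_mb(container_mb, cutoff_ratio=0.25, cutoff_min_mb=384):
    """Rough sketch of how Flink's YARN client derives the TaskManager
    JVM heap from the requested container size: the larger of a fixed
    minimum and a ratio of the container is held back for off-heap and
    native memory, and the rest becomes -Xmx.
    The default values here are assumptions for the 1.1.x line."""
    cutoff = max(cutoff_min_mb, int(container_mb * cutoff_ratio))
    return container_mb - cutoff

# A 4096 MB container (-ytm 4096) yields roughly a 3 GB heap,
# consistent with the ~3.00 GB initial heap the TaskManager reports below:
print(yarn_container_heap_mb(4096))  # -> 3072
```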

Are you running a batch or a streaming job?


On Tue, Nov 29, 2016 at 12:43 PM, <rimin515@sina.cn> wrote:

> Hi,
>      I have a Flink job, packaged into a jar with sbt assembly, which I
> submit to YARN with the following command:
> ------------------------------------------------------------------------
> /home/www/flink-1.1.1/bin/flink run \
> -m yarn-cluster \
> -yn 1 \
> -ys 2 \
> -yjm 4096 \
> -ytm 4096 \
> --class skRecomm.SkProRecommFlink \
> --classpath file:///opt/cloudera/parcels/CDH/lib/hbase/hbase-client.jar \
> --classpath file:///opt/cloudera/parcels/CDH/lib/hbase/hbase-protocol.jar \
> --classpath file:///opt/cloudera/parcels/CDH/lib/hbase/hbase-common.jar \
> --classpath file:///opt/cloudera/parcels/CDH/jars/htrace-core-3.1.0-incubating.jar \
> --classpath file:///opt/cloudera/parcels/CDH/lib/hbase/lib/guava-12.0.1.jar \
> /home/www/flink-mining/deploy/zx_article-7cffb87.jar
> ------------------------------------------------------------------------
> The command is run under supervisor on one machine (*.*.*.22).
> ----------------------------
> In flink/conf/flink-conf.yaml, I set these parameters:
> ------------------------------------------
> fs.hdfs.hadoopconf: /etc/hadoop/conf/
> jobmanager.web.port: 8081
> parallelism.default: 1
> taskmanager.memory.preallocate: false
> taskmanager.numberOfTaskSlots: 1
> taskmanager.heap.mb: 512
> jobmanager.heap.mb: 256
> jobmanager.rpc.port: 6123
> jobmanager.rpc.address: localhost
>
> ------------------------------------------
> The job runs successfully, and the YARN monitor shows the following:
>
> flink.base.dir.path /data1/yarn/nm/usercache/work/appcache/application_1472623395420_36719/container_e03_1472623395420_36719_01_000001
> fs.hdfs.hadoopconf /etc/hadoop/conf/
> jobmanager.heap.mb 256
> jobmanager.rpc.address *.*.*.79   (not *.*.*.22; the TaskManager is on *.*.*.69)
> jobmanager.rpc.port 32987
> jobmanager.web.port 0
> parallelism.default        1
> recovery.zookeeper.path.namespace application_1472623395420_36719
> taskmanager.heap.mb 512
> taskmanager.memory.preallocate false
> taskmanager.numberOfTaskSlots 1
>
> -----------------------------------------------------
> Overview
> Data Port   All Slots   Free Slots   CPU Cores   Physical Memory   Free Memory   Flink Managed Memory
> 30471       2           0            32          189 GB            2.88 GB       1.96 GB
> ------------------------------------------------------------
> -----------------------------------------------------------
> Memory
> JVM (Heap/Non-Heap)
> Type      Committed   Initial   Maximum
> Heap      2.92 GB     3.00 GB   2.92 GB
> Non-Heap  53.4 MB     23.4 MB   130 MB
> Total     2.97 GB     3.02 GB   3.04 GB
> -----------------------------------------------------------------
> Outside JVM
> Type     Count   Used     Capacity
> Direct   510     860 KB   860 KB
> Mapped   0       0 B      0 B
> -------------------------------------------------------------------
>
> On machine (*.*.*.22) I see that pid 345 uses 2.36 GB of memory, and pid
> 345 is the job process started from supervisor.
>
> I really do not understand why. The job runs in YARN, so why does it use
> so much memory on machine (*.*.*.22)? I only submitted the job from
> (*.*.*.22).
>
> Thank you for answering my question.
>
>
