flink-user mailing list archives

From Xintong Song <tonysong...@gmail.com>
Subject Re: Question about the flink 1.6 memory config
Date Tue, 31 Mar 2020 10:33:13 GMT
The container cut-off accounts not only for metaspace, but also for native
memory footprints such as thread stacks, the code cache, and the compressed
class space. If you run streaming jobs with the RocksDB state backend, it
also accounts for RocksDB's memory usage.

The consequence of a smaller cut-off depends on your environment and
workloads. For standalone clusters, the cut-off has no effect. For
containerized environments, depending on your Yarn/Mesos configuration, your
container may or may not get killed for exceeding the container memory limit.
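For reference, the cut-off formula quoted in the question below can be
sketched as plain Java (a simplified illustration, not Flink's actual code;
the real computation lives in Flink's ContaineredTaskManagerParameters, and
the 600 MB floor corresponds to the containerized.heap-cutoff-min option):

```java
// Simplified sketch of the containerized memory cut-off computation.
// Class and method names here are illustrative, not Flink's own.
public class CutoffSketch {
    // Assumed defaults of the pre-1.10 Flink options:
    static final long DEFAULT_CUTOFF_MIN_MB = 600;    // containerized.heap-cutoff-min
    static final double DEFAULT_CUTOFF_RATIO = 0.25;  // containerized.heap-cutoff-ratio

    // cut-off = max(min-cutoff, ratio * total TaskManager memory)
    static long cutoffMB(long taskManagerMemoryMB, double ratio, long minMB) {
        return Math.max(minMB, (long) (taskManagerMemoryMB * ratio));
    }

    public static void main(String[] args) {
        // 4 GB TaskManager with the default ratio 0.25 -> 1024 MB cut-off
        System.out.println(cutoffMB(4096, DEFAULT_CUTOFF_RATIO, DEFAULT_CUTOFF_MIN_MB));
        // Lowering the ratio to 0.15 -> 614 MB (still above the 600 MB floor)
        System.out.println(cutoffMB(4096, 0.15, DEFAULT_CUTOFF_MIN_MB));
        // For a small 2 GB TaskManager, the 600 MB floor dominates either way
        System.out.println(cutoffMB(2048, 0.15, DEFAULT_CUTOFF_MIN_MB));
    }
}
```

Note that for small containers the 600 MB floor, not the ratio, determines
the cut-off, so lowering the ratio only helps on larger TaskManagers.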

Thank you~

Xintong Song

On Tue, Mar 31, 2020 at 5:34 PM LakeShen <shenleifighting@gmail.com> wrote:

> Hi community,
> I am currently optimizing the Flink 1.6 task memory configuration. Reading
> the source code, I see that Flink first computes the cut-off memory as
> cut-off memory = Math.max(600, containerized.heap-cutoff-ratio * TaskManager
> Memory), where containerized.heap-cutoff-ratio defaults to 0.25. For
> example, if the TaskManager memory is 4 GB, the cut-off memory is 1 GB.
> However, after enabling the TaskManager's GC log, I found that the metaspace
> only used 60 MB. I personally feel that the cut-off memory configuration is
> a little too large. Can this cut-off memory be reduced, e.g. by setting
> containerized.heap-cutoff-ratio to 0.15?
> Would this configuration cause any problems?
> I am looking forward to your reply.
> Best wishes,
> LakeShen
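The change LakeShen asks about would be a flink-conf.yaml edit along these
lines (a sketch, assuming the pre-1.10 containerized cut-off options; these
were replaced by the unified memory model in Flink 1.10+):

```yaml
# flink-conf.yaml (sketch) — lower the containerized heap cut-off ratio.
# Whether this is safe depends on native memory usage (threads, code cache,
# RocksDB), per Xintong's answer above.
containerized.heap-cutoff-ratio: 0.15   # default: 0.25
containerized.heap-cutoff-min: 600      # MB; floor applied via Math.max
```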
