flink-user mailing list archives

From Lasse Nedergaard <lassenedergaardfl...@gmail.com>
Subject Re: Savepoint memory overhead
Date Thu, 30 Apr 2020 04:39:27 GMT
We are using Flink 1.10, running on Mesos.

Med venlig hilsen / Best regards
Lasse Nedergaard

> On 30 Apr 2020, at 04:53, Yun Tang <myasuka@live.com> wrote:
> Hi Lasse
> Which version of Flink are you using? Before Flink 1.10, there was a known memory problem
when RocksDB executed savepoints using a write batch [1].
> [1] https://issues.apache.org/jira/browse/FLINK-12785
> Best
> Yun Tang
> From: Lasse Nedergaard <lassenedergaardflink@gmail.com>
> Sent: Wednesday, April 29, 2020 21:17
> To: user <user@flink.apache.org>
> Subject: Savepoint memory overhead
> Hi.
> I would like to know if there are any guidelines/recommendations for the memory overhead
we need to account for when taking a savepoint to S3. We use the RocksDB state backend.
> We run our job on relatively small task managers, and we can see that we get memory problems
when the state size per task manager gets "big" (we haven't found the rule of thumb yet).
We can remove the problem by reducing the state size or increasing parallelism, and jobs
with no or small state don't have any problems.
> So I see a relation between the memory allocated to a task manager and the state
it can handle.
> So does anyone have any recommendations/best practices for this, and can someone explain
why taking a savepoint requires memory?
> Thanks
> In advance
> Lasse Nedergaard 
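
For context, the settings that typically govern RocksDB memory use on Flink 1.10 can be sketched in flink-conf.yaml. The values and the S3 bucket below are illustrative assumptions, not recommendations; the right managed-memory size depends on the state volume per task manager:

```yaml
# Use RocksDB as the state backend; savepoints go to S3.
state.backend: rocksdb
state.savepoints.dir: s3://my-bucket/savepoints   # hypothetical bucket

# Bound RocksDB's block cache and memtables by Flink's managed memory
# budget (the default in 1.10), so native memory does not grow unchecked.
state.backend.rocksdb.memory.managed: true

# Fraction of task manager memory reserved as managed memory; larger
# state per task manager generally needs a larger pool. Illustrative value.
taskmanager.memory.managed.fraction: 0.4
```

A savepoint itself also needs memory because the full state is read out of RocksDB and serialized for upload, which is why larger per-task-manager state puts more pressure on the same heap and native budgets.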
