flink-user mailing list archives

From vijayakumar palaniappan <vijayakuma...@gmail.com>
Subject Re: Incremental RocksDB checkpointing
Date Fri, 01 Dec 2017 17:22:51 GMT
I observed the job for 18 hours; the checkpoint size went from 118 KB to 1.10 MB.

I am using Flink version 1.3.0.
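As background for the discussion below: in Flink 1.3.x, incremental checkpointing is enabled through a constructor flag on the RocksDB state backend rather than a config key. A minimal sketch, assuming a job that counts events per key in event-time windows; the checkpoint URI and interval are placeholders:

```java
import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class IncrementalCheckpointExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Take a checkpoint every 60 seconds (placeholder interval).
        env.enableCheckpointing(60_000L);

        // Second constructor argument enables incremental checkpoints
        // (available since Flink 1.3.0). The URI is a placeholder.
        env.setStateBackend(
                new RocksDBStateBackend("hdfs:///flink/checkpoints", true));

        // ... define sources, keyBy, windowed count, sinks here ...

        env.execute("incremental-checkpointing-sketch");
    }
}
```

With the flag set to false (or using the single-argument constructor), the backend takes full checkpoints, which matches the "constant size" behavior described further down in the thread.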

On Fri, Dec 1, 2017 at 11:39 AM, Stefan Richter <s.richter@data-artisans.com> wrote:

> Maybe one more question: is the size always increasing, or does it also
> shrink eventually? Over what period of time did you observe the growth?
> Because of the way RocksDB works, it persists updates in a way that is
> sometimes closer to a log than to in-place updates. So it is perfectly
> possible to observe growing state for some time. Eventually, once the
> state reaches a critical mass, RocksDB will compact and prune the
> written state, and that is when you should also observe a drop in size.
>
> From what it seems, your use case works with a very small state, so if
> this is not just a test, you should reconsider whether this is the right
> use case for a) incremental checkpoints and b) RocksDB at all.
>
> > On 01.12.2017 at 16:34, vijayakumar palaniappan <vijayakumarpl@gmail.com> wrote:
> >
> > I have a simple event-time window aggregate count function with
> > incremental checkpointing enabled. The checkpoint size keeps increasing
> > over time, even though my input data has a single key and data flows
> > at a constant rate.
> >
> > When I turn off incremental checkpointing, the checkpoint size remains
> > constant.
> >
> > Are there any switches I need to enable, or is this a bug?
> >
> > --
> > Thanks,
> > -Vijay
>
>


-- 
Thanks,
-Vijay
