flink-user mailing list archives

From Piotr Nowojski <pi...@data-artisans.com>
Subject Re: Conceptual question
Date Thu, 07 Jun 2018 08:10:46 GMT

A general solution for state/schema migration is under development; it might be released
with Flink 1.6.0.

Before that, you need to handle the state migration manually in your operator’s open method.
Let’s assume that your OperatorV1 has a state field “stateV1”. Your OperatorV2 defines a
field “stateV2”, which is incompatible with the previous version. What you can do is add
logic to the open method that checks:
1. If “stateV2” is non-empty, do nothing.
2. If there is no “stateV2”, iterate over all of the keys and manually migrate “stateV1”
to “stateV2”.

In your OperatorV3 you could drop support for “stateV1”.
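The two-step check above can be sketched in plain Java. This is only an illustration of the control flow, not Flink API usage: the `HashMap`s stand in for Flink's keyed state (in a real operator you would obtain state via `getRuntimeContext().getState(...)` and iterate keys through the keyed state backend), and the `StateV1`/`StateV2` classes and their fields are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

public class Main {
    // Hypothetical old and new state schemas (incompatible with each other)
    static class StateV1 { final int count; StateV1(int c) { count = c; } }
    static class StateV2 { final long count; final long lastSeen;
        StateV2(long c, long t) { count = c; lastSeen = t; } }

    // Stand-ins for Flink keyed state: one entry per key
    static Map<String, StateV1> stateV1 = new HashMap<>();
    static Map<String, StateV2> stateV2 = new HashMap<>();

    // Mirrors the check described above, run once when the operator opens
    static void migrateInOpen() {
        if (!stateV2.isEmpty()) {
            return; // 1. "stateV2" is non-empty: do nothing
        }
        // 2. no "stateV2" yet: iterate over all keys and migrate manually
        for (Map.Entry<String, StateV1> e : stateV1.entrySet()) {
            stateV2.put(e.getKey(), new StateV2(e.getValue().count, 0L));
        }
        stateV1.clear(); // old state is no longer needed after migration
    }

    public static void main(String[] args) {
        stateV1.put("user-1", new StateV1(7)); // restored OperatorV1 state
        migrateInOpen();
        System.out.println(stateV2.get("user-1").count); // prints 7
    }
}
```

Because the migration runs only when “stateV2” is still empty, it is idempotent across restarts: once the operator has migrated, subsequent opens fall through the first check and do nothing.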

I once implemented something like that here:

Hope that helps!


> On 6 Jun 2018, at 17:04, TechnoMage <mlatta@technomage.com> wrote:
> We are still pretty new to Flink and I have a conceptual / DevOps question.
> When a job is modified and we want to deploy the new version, what is the preferred method? Our jobs have a lot of keyed state.
> If we use snapshots we have old state that may no longer apply to the new pipeline.
> If we start a new job we can reprocess historical data from Kafka, but that can be very resource heavy for a while.
> Is there an option I am missing? Are there facilities to “patch” or “purge” the keyed state selectively?
> Michael
