flink-user mailing list archives

From Astrac <aldo.st...@gmail.com>
Subject Controlling savepoints from inside an application
Date Fri, 30 Sep 2016 13:17:42 GMT
In the project I am working on we version all our Flink operators so that we
can rebuild the state from external sources (i.e. Kafka) by bumping that
version number. This works pretty nicely so far, except that we need to know
whether or not to load the savepoint before deploying, since we have to
provide different command line arguments via our deployment script.

What I'd like to do instead is version the savepoint storage itself and keep
the savepoints in different folders depending on the version of the
application we are running. That way, when we bump the version number we
genuinely start from scratch and don't risk any exception due to state
deserialisation; when we don't bump it, we keep the state from the previous
version of the application and start from there.
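Concretely, the layout we have in mind looks something like this (the object name and base directory are made up for illustration, not part of our codebase):

```scala
// Illustrative sketch only: map an application version to its own
// savepoint directory, so bumping the version starts from scratch.
object SavepointPaths {
  def savepointDir(baseDir: String, version: Int): String =
    s"$baseDir/v$version"
}
```

With this, version 3 would write and restore its savepoints under `hdfs:///savepoints/v3`, while bumping to 4 would point at an empty directory.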

To do this I would need to control the storage path of the savepoints from
within our application code, but I couldn't find a way to do it. If it's
relevant: we run on YARN, use the FsStateBackend for checkpoints, and keep
both savepoints and checkpoints on HDFS. Our main class looks something like

    val flinkEnvironment = StreamExecutionEnvironment.getExecutionEnvironment
    val stateBackend = new FsStateBackend("hdfs:///flink/checkpoints") // illustrative path
    flinkEnvironment.setStateBackend(stateBackend)
    // ... define more configurations and the streaming jobs

Is there a way in this initialisation code to achieve the following?

* Configure the savepoint path while we build the StreamExecutionEnvironment
rather than in flink-conf.yaml
* Manually read a savepoint rather than passing it via the CLI
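Failing that, our fallback is to keep the decision in the deployment script and pass the savepoint via `flink run -s` only when one exists for the current version. A rough sketch of that decision logic (the object name is hypothetical, and the existence check is injected so it can be exercised without HDFS):

```scala
// Hypothetical deployment-side sketch: resume from the version's
// savepoint directory only when one already exists; otherwise start
// the job fresh with no -s argument.
object DeployArgs {
  def runArgs(baseDir: String, version: Int,
              exists: String => Boolean): Seq[String] = {
    val dir = s"$baseDir/v$version"
    if (exists(dir)) Seq("-s", dir) else Seq.empty
  }
}
```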

Many Thanks!
