flink-user mailing list archives

From Ufuk Celebi <...@apache.org>
Subject Re: Flink Upgrades and Job Upgrades
Date Mon, 30 May 2016 09:40:07 GMT
2) What about the following: You trigger a savepoint and deploy the
2nd version from the savepoint. You let both job versions run at the
same time and switch the consumers of the job to the new version (e.g.
a Kafka output topic v2). On the Flink side this should be possible,
but it moves the problem to whether you can switch the consumed data
(e.g. the Kafka topic) without downtime.
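As a rough sketch, the savepoint-and-redeploy flow described above could look like this with the Flink CLI. The job ID, jar name, savepoint path, and topic name below are all hypothetical placeholders:

```shell
# Trigger a savepoint for the running v1 job; the CLI prints the
# savepoint path on success. (deadbeef1234 is a placeholder job ID.)
bin/flink savepoint deadbeef1234

# Start the v2 job from that savepoint while v1 keeps running.
# (job-v2.jar, the savepoint path, and the topic name are placeholders.)
bin/flink run -s hdfs:///flink/savepoints/savepoint-abcd job-v2.jar \
    --output-topic events-v2

# Once downstream consumers have switched to the v2 output topic,
# cancel the v1 job.
bin/flink cancel deadbeef1234
```

Note that this requires v2's operator state to be compatible with the savepoint taken from v1, and the cutover of downstream consumers to the new topic still has to be coordinated outside of Flink.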

On Mon, May 30, 2016 at 11:18 AM, Aljoscha Krettek <aljoscha@apache.org> wrote:
> Hi,
> hot update of a running cluster is not possible right now. And there is also
> no one working on this for the near future. We are aware that this would be
> nice to have, though.
> For 2), this is possible, but not without stopping the job. Savepoints are
> the feature that was introduced for that:
> https://ci.apache.org/projects/flink/flink-docs-master/apis/streaming/savepoints.html
> Best,
> Aljoscha
> On Fri, 27 May 2016 at 23:35 Chris Wildman <cwildman@newrelic.com> wrote:
>> Hi,
>> I have two questions around Flink deployment. I have done some research
>> but I wanted to confirm my conclusions.
>> 1) Is it possible to do a live upgrade of a Flink cluster from one version
>> to the next without having to take down the entire cluster? (Imagining a
>> rolling deploy where we only take down one JobManager or TaskManager at a
>> time while the rest of the system continues)
>> 2) Is it possible to upgrade a streaming dataflow without having to stop
>> it? I see mention of "Versioning for DataStream programs" on the roadmap
>> here:
>> https://docs.google.com/document/d/1ExmtVpeVVT3TIhO1JoBpC5JKXm-778DAD7eqw5GANwE/edit#heading=h.qn8zyouct55x.
>> Assuming these are not available today are they on the roadmap for the
>> future? Any projected release that might have this functionality?
>> Thanks,
>> Chris
