flink-user mailing list archives

From Igor Berman <igor.ber...@gmail.com>
Subject hot deployment of stream processing(event at a time) jobs
Date Thu, 19 May 2016 06:18:06 GMT
I have a simple job that consumes events from a Kafka topic and processes them
with filter/flat-map only (i.e. no aggregation, no windows, no private state).
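To make the shape of the job concrete, here is a toy sketch of such a stateless filter/flat-map pipeline in plain Python (not Flink's API; the event fields `type`, `user`, and `tags` are made up for illustration). The point is that each event is handled independently, so no operator state needs to survive a redeployment:

```python
# Toy sketch of a stateless filter/flat-map pipeline (plain Python,
# not Flink's API). Each event is processed on its own, one at a time.

def pipeline(events):
    for event in events:                    # consume one event at a time
        if event.get("type") != "click":    # filter: drop non-click events
            continue
        for tag in event.get("tags", []):   # flat-map: one output per tag
            yield {"user": event["user"], "tag": tag}

events = [
    {"type": "click", "user": "a", "tags": ["x", "y"]},
    {"type": "view",  "user": "b", "tags": ["z"]},
]
print(list(pipeline(events)))
# -> [{'user': 'a', 'tag': 'x'}, {'user': 'a', 'tag': 'y'}]
```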

The most important constraint in my setup is to continue processing no
matter what, i.e. stopping to cancel the job and restart it with a new
version is not an option, because that would mean a few seconds of downtime.

I was thinking about the Blue/Green deployment concept: start a new job
with the new fat jar while the old job is still running, and then
eventually cancel the old one.
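The semantics question during such a handover can be illustrated with a toy model. Note the big assumption: each job is treated as an independent consumer reading the full topic with its own offsets (whether Flink/Kafka actually behave this way here is exactly the open question). Under that assumption, the relative timing of "new job starts" vs. "old job stops" decides between duplicates and lost events:

```python
# Toy illustration of a blue/green handover window. Assumption (not a
# claim about Flink): each job reads the whole topic independently.
# Events are modeled as indices into the topic.

def delivered(topic, old_stop, new_start):
    """Which events each job processes if the old job stops at offset
    old_stop and the new job starts reading from offset new_start."""
    old = [i for i in topic if i < old_stop]
    new = [i for i in topic if i >= new_start]
    return old, new

topic = list(range(10))

# Overlap: new job starts before the old one is cancelled -> duplicates.
old, new = delivered(topic, old_stop=7, new_start=5)
assert set(old) & set(new) == {5, 6}      # events 5 and 6 processed twice

# Gap: old job stops before the new one starts -> lost events.
old, new = delivered(topic, old_stop=5, new_start=7)
assert set(old) | set(new) == set(topic) - {5, 6}  # events 5 and 6 dropped
```

In this model an overlapping handover gives at-least-once behavior (some duplicates), while a gap loses events, which is why the ordering of the two steps matters.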

How will Flink handle such a scenario? What will happen to the semantics
of event processing during the transition?

I know that core Kafka has a consumer re-balancing mechanism, but I'm not
too familiar with it.
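For reference, the rebalancing idea in plain Kafka consumer groups can be sketched with a toy simulation of a range-style partition assignor (this is only a model of the assignment rule, not the real group protocol, and whether Flink's Kafka source participates in it is part of the question above):

```python
# Toy model of Kafka-style consumer-group rebalancing (range assignor):
# the topic's partitions are redistributed whenever a consumer joins or
# leaves the group. Simulation only, not the real group protocol.

def assign(partitions, consumers):
    """Split the partition list contiguously across the sorted consumers."""
    consumers = sorted(consumers)
    per, extra = divmod(len(partitions), len(consumers))
    out, start = {}, 0
    for i, c in enumerate(consumers):
        count = per + (1 if i < extra else 0)
        out[c] = partitions[start:start + count]
        start += count
    return out

partitions = [0, 1, 2, 3]

# One consumer (the "old" job) owns every partition.
assert assign(partitions, ["old"]) == {"old": [0, 1, 2, 3]}

# A second consumer in the same group triggers a rebalance: the
# partitions are split between the two, so each sees only part of the topic.
assert assign(partitions, ["old", "new"]) == {"new": [0, 1], "old": [2, 3]}
```

If both jobs joined the same group this way, each would see only a subset of the partitions during the overlap, which is a different semantics than both jobs reading everything.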

Any thoughts would be highly appreciated.

Thanks in advance
