flink-user mailing list archives

From vino yang <yanghua1...@gmail.com>
Subject Re: Committing Kafka Transactions during Savepoint
Date Mon, 30 Jul 2018 15:34:33 GMT
Hi Scott,

For EXACTLY_ONCE semantics on the sink side with the Kafka 0.11+ producer, the answer is yes. The official Flink documentation on the Kafka connector covers this in detail.
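For reference, wiring up the 0.11 producer for exactly-once looks roughly like the sketch below. The topic name, bootstrap servers, and checkpoint interval are placeholders; also note that the producer's transaction.timeout.ms must not exceed the broker's transaction.max.timeout.ms (which defaults to 15 minutes), or the producer will fail at startup.

```java
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011;
import org.apache.flink.streaming.util.serialization.KeyedSerializationSchemaWrapper;

public class ExactlyOnceSinkSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        // Kafka transactions are committed when a checkpoint completes,
        // so checkpointing must be enabled for EXACTLY_ONCE to work.
        env.enableCheckpointing(60_000L); // placeholder interval

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
        // Keep this at or below the broker's transaction.max.timeout.ms.
        props.setProperty("transaction.timeout.ms", "900000");

        env.fromElements("a", "b", "c")
           .addSink(new FlinkKafkaProducer011<>(
                   "my-topic", // placeholder topic
                   new KeyedSerializationSchemaWrapper<>(new SimpleStringSchema()),
                   props,
                   FlinkKafkaProducer011.Semantic.EXACTLY_ONCE));

        env.execute("exactly-once sketch");
    }
}
```

Running this requires a Flink runtime and a reachable Kafka 0.11+ broker, so treat it as a configuration sketch rather than a standalone program.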


Thanks, vino.

2018-07-27 22:53 GMT+08:00 Scott Kidder <kidder.scott@gmail.com>:

> Thank you, Aljoscha! Are Kafka transactions committed when a running job
> has been instructed to cancel with a savepoint (e.g. `flink cancel -s
> xxxx`)? This is my primary use for savepoints. I would expect that when a
> new job is submitted with the savepoint, as in the case of an application
> upgrade, Flink will create a new Kafka transaction and processing will be
> exactly-once.
> --Scott Kidder
> On Fri, Jul 27, 2018 at 5:09 AM Aljoscha Krettek <aljoscha@apache.org>
> wrote:
>> Hi,
>> this has been in the back of my head for a while now. I finally created a
>> Jira issue: https://issues.apache.org/jira/browse/FLINK-9983
>> In there, I also outline a better fix that will take a bit longer to
>> implement.
>> Best,
>> Aljoscha
>> On 26. Jul 2018, at 23:04, Scott Kidder <kidder.scott@gmail.com> wrote:
>> I recently began using the exactly-once processing semantic with the
>> Kafka 0.11 producer in Flink 1.4.2. It's been working great!
>> Are Kafka transactions committed when creating a Flink savepoint? How
>> does this affect the recovery behavior in Flink if, before the completion
>> of the next checkpoint, the application is restarted and restores from a
>> checkpoint taken before the savepoint? It seems like this might lead to the
>> Kafka producer writing a message multiple times with different committed
>> Kafka transactions.
>> --
>> Scott Kidder
