flink-user mailing list archives

From Piotr Nowojski <pi...@data-artisans.com>
Subject Re: Between Checkpoints in Kafka 11
Date Mon, 24 Sep 2018 13:38:15 GMT
Hi,

I have nothing more to add. You (Dawid) and Vino explained it correctly :)

Piotrek

> On 24 Sep 2018, at 15:16, Dawid Wysakowicz <dwysakowicz@apache.org> wrote:
> 
> Hi Harshvardhan,
> 
> Flink won't buffer all the events between checkpoints. Flink uses Kafka transactions, which are committed only on checkpoints, so the data is persisted on the Kafka side but only becomes readable once the transaction is committed.
> I've cced Piotr, who implemented the Kafka 0.11 connector, in case he wants to correct me or add something to the answer.
> 
> Best,
> 
> Dawid
> 
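The behaviour Dawid describes (data persisted to Kafka immediately, but only readable once the transaction commits) can be illustrated with a minimal, self-contained Python sketch. No real Kafka client is used; `TxnLog` and its methods are invented stand-ins for a partition log with transaction markers:

```python
# Toy model (invented names, no real Kafka client) of how transactional
# writes are persisted immediately but only become visible to a
# read_committed consumer after the commit marker.

class TxnLog:
    """Stand-in for a Kafka partition log with transaction markers."""

    def __init__(self):
        self.entries = []           # (record, txn_id), appended as produced
        self.committed_txns = set()

    def append(self, record, txn_id):
        # Data is written (persisted) right away, before any commit.
        self.entries.append((record, txn_id))

    def commit(self, txn_id):
        # When a Flink checkpoint completes, the sink commits its transaction.
        self.committed_txns.add(txn_id)

    def read_committed(self):
        # A read_committed consumer skips records of uncommitted transactions.
        return [r for r, t in self.entries if t in self.committed_txns]


log = TxnLog()
log.append("event-1", txn_id="txn-A")   # written between checkpoints
log.append("event-2", txn_id="txn-A")

print(log.read_committed())   # [] - persisted, but not yet visible

log.commit("txn-A")           # checkpoint completes, transaction commits
print(log.read_committed())   # ['event-1', 'event-2']
```

The key point of the sketch: `append` happens continuously, so nothing piles up in the sink; only visibility, not durability, waits for the commit.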
> On 23/09/18 17:48, Harshvardhan Agrawal wrote:
>> Hi,
>> 
>> Can someone please help me understand how the exactly-once semantics work with Kafka 0.11 in Flink?
>> 
>> Thanks,
>> Harsh
>> 
>> On Tue, Sep 11, 2018 at 10:54 AM Harshvardhan Agrawal <harshvardhan.agr93@gmail.com> wrote:
>> Hi,
>> 
>> I was going through the blog post on how the TwoPhaseCommitSinkFunction works with Kafka 0.11. One of the things I don't understand is: what is the behavior of the Kafka 0.11 producer between two checkpoints? Say the interval between two checkpoints is set to 15 minutes. Will Flink buffer all records in memory in that case and only start writing to Kafka when the next checkpoint starts?
>> 
>> Thanks!
>> -- 
>> Regards,
>> Harshvardhan
>> 
>> 
>> -- 
>> Regards,
>> Harshvardhan Agrawal
>> 267.991.6618 | LinkedIn <https://www.linkedin.com/in/harshvardhanagr/>
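To the question about a 15-minute checkpoint interval: the sink does not buffer records in memory between checkpoints. It writes each record to Kafka immediately inside an open transaction, flushes on the checkpoint barrier (pre-commit), and commits once the checkpoint completes. A self-contained Python sketch of that lifecycle follows; the class and method names are invented for illustration, the real implementation being Flink's TwoPhaseCommitSinkFunction and FlinkKafkaProducer011:

```python
# Toy model of a two-phase-commit sink: records go out immediately inside
# an open transaction; commits happen only when a checkpoint completes.

class TwoPhaseCommitSketch:
    def __init__(self):
        self.kafka_log = []          # (record, txn_id): persisted immediately
        self.committed = set()
        self.txn_counter = 0
        self.current_txn = self._begin_transaction()

    def _begin_transaction(self):
        self.txn_counter += 1
        return f"txn-{self.txn_counter}"

    def invoke(self, record):
        # Per-record path: written straight to Kafka, not buffered in Flink.
        self.kafka_log.append((record, self.current_txn))

    def pre_commit(self):
        # Phase 1, on the checkpoint barrier: flush in-flight writes so the
        # transaction durably contains everything up to the barrier.
        pass  # a real producer would call flush() here

    def commit(self):
        # Phase 2, once the checkpoint completes everywhere: commit the
        # transaction and open a fresh one for the next interval.
        self.committed.add(self.current_txn)
        self.current_txn = self._begin_transaction()

    def visible_to_read_committed(self):
        return [r for r, t in self.kafka_log if t in self.committed]


sink = TwoPhaseCommitSketch()
sink.invoke("a")                 # records keep flowing out between checkpoints
sink.invoke("b")
print(len(sink.kafka_log))               # 2 -> nothing buffered in the sink
print(sink.visible_to_read_committed())  # [] -> but nothing visible yet

sink.pre_commit()                # checkpoint barrier reached
sink.commit()                    # checkpoint complete
print(sink.visible_to_read_committed())  # ['a', 'b']
```

A long checkpoint interval therefore costs consumer latency (data is only readable after each commit), not sink memory.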

