flink-issues mailing list archives

From "Tzu-Li (Gordon) Tai (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (FLINK-5728) FlinkKafkaProducer should flush on checkpoint by default
Date Tue, 13 Feb 2018 06:52:00 GMT

    [ https://issues.apache.org/jira/browse/FLINK-5728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16361900#comment-16361900 ]

Tzu-Li (Gordon) Tai commented on FLINK-5728:

There was some discussion on the mailing list [1] about doing this as part of a major rework
of the Kafka / Kinesis connectors in Flink 1.6. I'll downgrade the priority and track this
under a Kafka / Kinesis connector rework umbrella issue.


[1] http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/Timestamp-watermark-support-in-Kinesis-consumer-td20910.html

> FlinkKafkaProducer should flush on checkpoint by default
> --------------------------------------------------------
>                 Key: FLINK-5728
>                 URL: https://issues.apache.org/jira/browse/FLINK-5728
>             Project: Flink
>          Issue Type: Improvement
>          Components: Kafka Connector
>            Reporter: Tzu-Li (Gordon) Tai
>            Priority: Blocker
> As discussed in FLINK-5702, it might be a good idea to let the FlinkKafkaProducer flush
> on checkpoints by default. Currently, flushing is disabled by default.
> It's a very simple change, but we should consider whether we are willing to break existing
> user behaviour or need a proper usage migration path.
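For context, the behaviour discussed above could already be enabled explicitly on the producer at the time of this issue. A minimal sketch, assuming a Flink 1.x streaming job with the Kafka 0.10 connector on the classpath; the topic name, broker address, and example stream are placeholders, not part of the issue:

```java
import java.util.Properties;

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

public class FlushOnCheckpointExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
            StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(5_000); // checkpoint every 5 seconds

        // Placeholder source stream for illustration.
        DataStream<String> stream = env.fromElements("a", "b", "c");

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");

        FlinkKafkaProducer010<String> producer = new FlinkKafkaProducer010<>(
            "my-topic", new SimpleStringSchema(), props);

        // The setting this issue is about: without this call (the default at
        // the time), records still buffered inside the Kafka client when a
        // checkpoint completes are not covered by that checkpoint and can be
        // lost on failure.
        producer.setFlushOnCheckpoint(true);

        stream.addSink(producer);
        env.execute("flush-on-checkpoint example");
    }
}
```

Flipping the default would amount to making `setFlushOnCheckpoint(true)` implicit, which is why the comment above raises the question of breaking existing user behaviour.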

This message was sent by Atlassian JIRA
