flink-issues mailing list archives

From static-max <...@git.apache.org>
Subject [GitHub] flink pull request #2579: [FLINK-4618] FlinkKafkaConsumer09 should start fro...
Date Fri, 30 Sep 2016 21:07:52 GMT
GitHub user static-max opened a pull request:


    [FLINK-4618] FlinkKafkaConsumer09 should start from the next record on startup from offsets in Kafka

    This PR addresses https://issues.apache.org/jira/browse/FLINK-4618: when a job starts
    fresh from offsets committed in Kafka, the last processed message is read from Kafka again.
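
For context, Kafka treats a committed offset as the position of the NEXT record to
consume, not the last record processed. Below is a minimal sketch (plain Kafka 0.9
consumer API, not the Flink connector code in this PR; broker address, group id and
topic are made-up placeholders) of where a freshly started consumer resumes:

    import java.util.Collections;
    import java.util.Properties;

    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    public class ResumePositionSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // assumed broker
            props.put("group.id", "flink-job");                // assumed group id
            props.put("key.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
            props.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");

            try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
                TopicPartition tp = new TopicPartition("my-topic", 0); // assumed topic
                consumer.assign(Collections.singletonList(tp));
                // With no local position, position() resolves to the group's
                // committed offset, i.e. where a restarted job resumes. If the
                // last processed offset (instead of last + 1) was committed,
                // the first poll() re-delivers that record.
                System.out.println("partition 0 resumes at offset " + consumer.position(tp));
            }
        }
    }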

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/static-max/flink flink-connector-kafka-0.9-fix-duplicate-messages

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/flink/pull/2579.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #2579

commit 0b564203cdae3b21b00bb499b85feb799136e29b
Author: static-max <max.kuklinski@live.de>
Date:   2016-09-30T19:45:38Z

    Merge pull request #1 from apache/master
    Pull from origin

commit 3618f5053e0ffb0ec1f789c56d878ed400e27056
Author: Max Kuklinski <max.kuklinski@live.de>
Date:   2016-09-30T21:03:30Z

    FLINK-4618 Incremented the committed offset by one to avoid re-reading the last message.
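
The commit above applies Kafka's "committed offset = next record to read" convention
on the commit side. A hedged sketch of the idea (raw Kafka consumer API, not the
actual connector code; the method name is made up):

    import java.util.Collections;

    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.common.TopicPartition;

    public class CommitNextOffsetSketch {
        // Commit the offset of the NEXT record to read so that a job started
        // fresh from Kafka offsets does not see the last processed record again.
        static void commitAfterProcessing(KafkaConsumer<byte[], byte[]> consumer,
                                          TopicPartition partition,
                                          long lastProcessedOffset) {
            consumer.commitSync(Collections.singletonMap(
                partition, new OffsetAndMetadata(lastProcessedOffset + 1)));
        }
    }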


