kafka-jira mailing list archives

From "Jeff Widman (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (KAFKA-2758) Improve Offset Commit Behavior
Date Mon, 06 Nov 2017 19:59:00 GMT

    [ https://issues.apache.org/jira/browse/KAFKA-2758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16240796#comment-16240796 ]

Jeff Widman commented on KAFKA-2758:
------------------------------------

Item 1 would be significantly more useful if [KIP-211](https://cwiki.apache.org/confluence/display/KAFKA/KIP-211%3A+Revise+Expiration+Semantics+of+Consumer+Group+Offsets)
gets accepted. That would remove the risk of accidentally expiring a consumer's offsets.

> Improve Offset Commit Behavior
> ------------------------------
>
>                 Key: KAFKA-2758
>                 URL: https://issues.apache.org/jira/browse/KAFKA-2758
>             Project: Kafka
>          Issue Type: Improvement
>          Components: consumer
>            Reporter: Guozhang Wang
>              Labels: newbiee, reliability
>
> There are two scenarios of offset committing that we can improve:
> 1) we can filter out the partitions whose committed offset is equal to the consumed offset,
> meaning no new messages have been consumed from the partition, and hence we do not need to
> include it in the commit request.
> 2) we can make a commit request right after resetting to a fetch / consume position, either
> according to the reset policy (e.g. on consumer startup, or when handling an out-of-range
> offset) or through {code}seek{code}, so that if the consumer fails right after these events,
> upon recovery it restarts from the reset position instead of resetting again. Otherwise this
> can lead to, for example, data loss if we use "largest" as the reset policy while new messages
> keep arriving on the fetched partitions.
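The filtering in item 1 above can be sketched as a pre-commit pass over the consumer's offset state. This is a minimal illustration using plain Python dicts to stand in for partition offset bookkeeping; the function name and dict shapes are assumptions, not the actual Kafka consumer internals.

```python
# Hypothetical sketch of item 1: before building an offset commit request,
# drop partitions whose consumed position has not moved past the last
# committed offset, since there is nothing new to commit for them.

def filter_commit_offsets(consumed, committed):
    """Return only the partition -> offset entries worth committing.

    consumed:  {(topic, partition): consumed_offset}
    committed: {(topic, partition): last_committed_offset}
    """
    return {
        tp: offset
        for tp, offset in consumed.items()
        if committed.get(tp) != offset  # unchanged -> skip this partition
    }

consumed = {("orders", 0): 42, ("orders", 1): 17, ("orders", 2): 5}
committed = {("orders", 0): 42, ("orders", 1): 10}  # partition 2 never committed

to_commit = filter_commit_offsets(consumed, committed)
# partition 0 is skipped (42 == 42); partitions 1 and 2 are included
```

Shrinking the commit request this way avoids redundant writes to the offsets topic for idle partitions.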
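Item 2 (commit immediately after a reset or seek) can be illustrated with a toy consumer. The class and method names here are illustrative only, not the real Kafka client API; the point is that persisting the reset position right away means a crash immediately afterwards does not trigger a second, possibly lossy, reset.

```python
# Toy sketch of item 2: commit the new position right after a reset/seek,
# so recovery resumes from it instead of re-running the reset policy.
# ToyConsumer is a hypothetical stand-in, not the Kafka consumer.

class ToyConsumer:
    def __init__(self, committed=None):
        self.committed = dict(committed or {})  # partition -> committed offset
        self.position = {}                      # partition -> next fetch offset

    def seek(self, partition, offset):
        self.position[partition] = offset
        # Item 2: persist the reset position immediately. If we crash before
        # consuming anything, recovery restarts here rather than resetting
        # again (which, with "largest", could skip newly arrived messages).
        self.commit({partition: offset})

    def commit(self, offsets):
        self.committed.update(offsets)

c = ToyConsumer(committed={0: 100})
c.seek(0, 250)  # e.g. reset to "largest" after an out-of-range offset
# c.committed[0] is now 250, so a restarted consumer resumes at 250
```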



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
