kafka-dev mailing list archives

From "Ismael Juma (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (KAFKA-5211) KafkaConsumer should not skip a corrupted record after throwing an exception.
Date Wed, 24 May 2017 20:46:04 GMT

    [ https://issues.apache.org/jira/browse/KAFKA-5211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16023656#comment-16023656 ]

Ismael Juma commented on KAFKA-5211:
------------------------------------

This particular JIRA doesn't need a KIP because it's indeed just restoring the behaviour in
trunk to match the behaviour from previous releases. We do need a KIP if we want to change
how we handle errors during deserialization and such.

> KafkaConsumer should not skip a corrupted record after throwing an exception.
> -----------------------------------------------------------------------------
>
>                 Key: KAFKA-5211
>                 URL: https://issues.apache.org/jira/browse/KAFKA-5211
>             Project: Kafka
>          Issue Type: Bug
>            Reporter: Jiangjie Qin
>            Assignee: Jiangjie Qin
>              Labels: clients, consumer
>             Fix For: 0.11.0.0
>
>
> In 0.10.2, when there is a corrupted record, KafkaConsumer.poll() will throw an exception
> and block on that corrupted record. In the latest trunk this behavior has changed to skip
> the corrupted record (which is the old consumer behavior). With KIP-98, skipping corrupted
> messages would be a little dangerous as the message could be a control message for a transaction.
> We should fix the issue to let the KafkaConsumer block on the corrupted messages.
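
The semantics at issue — poll() throwing and then staying on the corrupted record, rather than silently skipping past it — can be sketched with a toy model. This is a hedged illustration in plain Python with hypothetical names (`ToyConsumer`, `CorruptRecordError`); it is not the actual Kafka client code:

```python
class CorruptRecordError(Exception):
    """Raised when the record at the consumer's current position is corrupted."""
    def __init__(self, offset):
        super().__init__(f"corrupted record at offset {offset}")
        self.offset = offset


class ToyConsumer:
    """Toy model (not the real client) of the 0.10.2 semantics this issue
    restores: poll() throws on a corrupted record and does NOT advance past
    it, so every subsequent poll() fails on the same offset until the
    application explicitly seeks past it."""

    def __init__(self, records):
        # records: list of (value, is_corrupted) pairs standing in for a partition log
        self.records = records
        self.position = 0

    def poll(self):
        if self.position < len(self.records) and self.records[self.position][1]:
            # Block on the corrupted record: position is not advanced.
            raise CorruptRecordError(self.position)
        batch = []
        while self.position < len(self.records) and not self.records[self.position][1]:
            batch.append(self.records[self.position][0])
            self.position += 1
        return batch

    def seek(self, offset):
        # The application's only way forward: deliberately skip the bad record.
        self.position = offset
```

Under the skip behavior the issue argues against, poll() would instead advance past the bad offset on its own, which is unsafe under KIP-98 because the skipped message could be a transaction control message.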



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
