kafka-jira mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (KAFKA-5630) Consumer poll loop over the same record after a CorruptRecordException
Date Tue, 01 Aug 2017 16:00:04 GMT

    [ https://issues.apache.org/jira/browse/KAFKA-5630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16109156#comment-16109156

ASF GitHub Bot commented on KAFKA-5630:

Github user asfgit closed the pull request at:


> Consumer poll loop over the same record after a CorruptRecordException
> ----------------------------------------------------------------------
>                 Key: KAFKA-5630
>                 URL: https://issues.apache.org/jira/browse/KAFKA-5630
>             Project: Kafka
>          Issue Type: Bug
>          Components: consumer
>    Affects Versions:
>            Reporter: Vincent Maurin
>            Assignee: Jiangjie Qin
>            Priority: Critical
>              Labels: regression, reliability
>             Fix For:
> Hello,
> While consuming a topic with log compaction enabled, I am getting an infinite consumption
loop over the same record, i.e., each call to poll returns the same record 500 times (500
is my max.poll.records). I am using the Java client.
> Running the code with the debugger, the initial problem comes from `Fetcher.PartitionRecords.fetchRecords()`.
> Here I get a `org.apache.kafka.common.errors.CorruptRecordException: Record size is less
than the minimum record overhead (14)`.
> Then the boolean `hasExceptionInLastFetch` is set to true, causing the test block in
`Fetcher.PartitionRecords.nextFetchedRecord()` to always return the last record.
> I guess the corruption problem is similar to https://issues.apache.org/jira/browse/KAFKA-5582,
but this behavior of the client is probably not the expected one.
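The flag interaction described above can be sketched in a minimal, self-contained form. This is a hypothetical simplification, not the actual `Fetcher` source: the class and field names mirror the report (`hasExceptionInLastFetch`, `nextFetchedRecord`), but the record list and corruption check are stand-ins. It shows how a latched exception flag that never advances the position makes every subsequent call return the same record, producing the infinite poll loop.

```java
import java.util.List;

// Hypothetical model of the reported bug: once hasExceptionInLastFetch is
// latched, the position never advances, so the same record is returned forever.
class PartitionRecordsSketch {
    private final List<String> records;
    private int position = 0;
    private boolean hasExceptionInLastFetch = false;

    PartitionRecordsSketch(List<String> records) {
        this.records = records;
    }

    String nextFetchedRecord() {
        if (hasExceptionInLastFetch) {
            // Bug: position is not advanced past the corrupt record,
            // so every later call re-returns the same slot.
            return records.get(position);
        }
        String rec = records.get(position);
        if (rec.startsWith("CORRUPT")) {
            hasExceptionInLastFetch = true;
            throw new RuntimeException(
                "Record size is less than the minimum record overhead (14)");
        }
        position++;
        return rec;
    }
}

public class Main {
    public static void main(String[] args) {
        PartitionRecordsSketch pr =
            new PartitionRecordsSketch(List.of("a", "CORRUPT-b", "c"));
        System.out.println(pr.nextFetchedRecord()); // "a"
        try {
            pr.nextFetchedRecord(); // throws on the corrupt record
        } catch (RuntimeException e) {
            System.out.println("exception: " + e.getMessage());
        }
        // The flag is now latched: each call returns the same record,
        // which is why poll() keeps delivering max.poll.records copies of it.
        System.out.println(pr.nextFetchedRecord()); // "CORRUPT-b"
        System.out.println(pr.nextFetchedRecord()); // "CORRUPT-b"
    }
}
```

In the sketch, resetting the flag and incrementing `position` after the exception would let consumption proceed past the corrupt record, which matches the direction of the fix referenced by the closed pull request.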

This message was sent by Atlassian JIRA
