Date: Mon, 24 Jul 2017 12:03:02 +0000 (UTC)
From: "Ismael Juma (JIRA)"
To: jira@kafka.apache.org
Subject: [jira] [Commented] (KAFKA-5630) Consumer poll loop over the same record after a CorruptRecordException

    [ https://issues.apache.org/jira/browse/KAFKA-5630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16098253#comment-16098253 ]

Ismael Juma commented on KAFKA-5630:
------------------------------------

[~vmaurin_glispa], the consumer behaviour is as expected. The application should decide whether it wants to skip the bad record (via `seek`) or not. However, we should figure out whether the corruption is due to a bug in Kafka and fix it, if that's the case.
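As a rough illustration of the skip-via-`seek` option mentioned above, the sketch below shows a poll loop against the 0.11 Java consumer that, on a KafkaException, seeks one offset past the current fetch position so consumption can continue beyond the corrupt record. This is only a sketch: the class name, topic name, byte-array key/value types, and the blanket treatment of all assigned partitions are assumptions for illustration, not anything defined by this issue or by Kafka itself.

{code:java}
import java.util.Collections;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.TopicPartition;

public class SkipCorruptRecordLoop {

    public static void run(KafkaConsumer<byte[], byte[]> consumer) {
        // "my-topic" is a placeholder topic name, not taken from this issue.
        consumer.subscribe(Collections.singletonList("my-topic"));
        while (true) {
            try {
                ConsumerRecords<byte[], byte[]> records = consumer.poll(1000);
                for (ConsumerRecord<byte[], byte[]> record : records) {
                    process(record);
                }
            } catch (KafkaException e) {
                // CorruptRecordException extends KafkaException. Skipping is an
                // application decision: this sketch assumes a single assigned
                // partition and seeks one offset past the current fetch position.
                // A real application would first identify which partition failed.
                for (TopicPartition tp : consumer.assignment()) {
                    consumer.seek(tp, consumer.position(tp) + 1);
                }
            }
        }
    }

    private static void process(ConsumerRecord<byte[], byte[]> record) {
        // Placeholder for application-specific handling of a good record.
    }
}
{code}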
> Consumer poll loop over the same record after a CorruptRecordException
> ----------------------------------------------------------------------
>
>                 Key: KAFKA-5630
>                 URL: https://issues.apache.org/jira/browse/KAFKA-5630
>             Project: Kafka
>          Issue Type: Bug
>          Components: consumer
>    Affects Versions: 0.11.0.0
>            Reporter: Vincent Maurin
>
> Hello
> While consuming a topic with log compaction enabled, I am getting an infinite consumption loop of the same record, i.e. each call to poll returns me the same record 500 times (500 is my max.poll.records). I am using the Java client 0.11.0.0.
> Running the code with the debugger, the initial problem comes from `Fetcher.PartitionRecords.fetchRecords()`.
> Here I get a `org.apache.kafka.common.errors.CorruptRecordException: Record size is less than the minimum record overhead (14)`
> Then the boolean `hasExceptionInLastFetch` is set to true, resulting in the test block in `Fetcher.PartitionRecords.nextFetchedRecord()` always returning the last record.
> I guess the corruption problem is similar to https://issues.apache.org/jira/browse/KAFKA-5582, but this behavior of the client is probably not the expected one.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
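For readers following the quoted description, here is a deliberately simplified sketch that paraphrases the re-delivery path the reporter describes. It is hypothetical and is not the actual `Fetcher.PartitionRecords` source; field and method names other than `hasExceptionInLastFetch` and `nextFetchedRecord()` are invented for illustration.

{code:java}
// Hypothetical paraphrase of the behaviour described in this report; NOT the real
// org.apache.kafka.clients.consumer.internals.Fetcher implementation.
class PartitionRecordsSketch {

    private boolean hasExceptionInLastFetch = false;
    private Object lastRecord; // record handed out by the previous call (invented name)

    Object nextFetchedRecord() {
        if (hasExceptionInLastFetch) {
            // The flag stays set, so every later call hands back the same record
            // instead of advancing, which is why poll() keeps returning the
            // identical record up to max.poll.records times.
            return lastRecord;
        }
        try {
            lastRecord = decodeNextRecord();
            return lastRecord;
        } catch (RuntimeException e) { // e.g. CorruptRecordException while parsing
            hasExceptionInLastFetch = true;
            throw e;
        }
    }

    private Object decodeNextRecord() {
        // Placeholder for reading and validating the next record of the fetched batch.
        return null;
    }
}
{code}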