From: "Julien Fabre (JIRA)"
To: dev@kafka.apache.org
Date: Wed, 1 Aug 2018 17:31:00 +0000 (UTC)
Subject: [jira] [Created] (KAFKA-7230) Empty Record created when producer failed due to RecordTooLargeException

Julien Fabre created KAFKA-7230:
-----------------------------------

             Summary: Empty Record created when producer failed due to RecordTooLargeException
                 Key: KAFKA-7230
                 URL: https://issues.apache.org/jira/browse/KAFKA-7230
             Project: Kafka
          Issue Type: Bug
    Affects Versions: 1.1.0
            Reporter: Julien Fabre

When a producer tries to produce a RecordBatch which is bigger than the message.max.bytes value, it fails with the error
{code:java}org.apache.kafka.common.errors.RecordTooLargeException{code}
but an empty Record gets created.

While hitting the RecordTooLargeException is expected, I was not expecting to see a new offset with an empty Record in the topic.

Is this a problem with Kafka, or should the consumer handle this case?
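The reporter's producer code is not attached; as a rough illustration only, the following is a minimal sketch of how a Go producer using Sarama's SyncProducer (see the test setup below) could trigger the broker-side RecordTooLargeException. The client-side MaxMessageBytes override and the 2 MB payload size are assumptions for the sketch; the broker address and the "events" topic are taken from the logs below.

{code:go}
package main

import (
	"log"
	"strings"

	"github.com/Shopify/sarama"
)

func main() {
	cfg := sarama.NewConfig()
	// SyncProducer requires successes to be reported back.
	cfg.Producer.Return.Successes = true
	// Raise the client-side limit so the oversized message actually reaches
	// the broker and is rejected there with RecordTooLargeException.
	cfg.Producer.MaxMessageBytes = 10 * 1024 * 1024

	producer, err := sarama.NewSyncProducer([]string{"localhost:9092"}, cfg)
	if err != nil {
		log.Fatalf("failed to create producer: %v", err)
	}
	defer producer.Close()

	// A ~2 MB value, larger than the broker's default message.max.bytes (~1 MB).
	msg := &sarama.ProducerMessage{
		Topic: "events",
		Value: sarama.StringEncoder(strings.Repeat("x", 2*1024*1024)),
	}

	partition, offset, err := producer.SendMessage(msg)
	if err != nil {
		// Expected path: the broker rejects the produce request.
		log.Printf("produce failed: %v", err)
		return
	}
	log.Printf("produced to partition %d at offset %d", partition, offset)
}
{code}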
Test setup :
- Kafka 2.11-1.1.0
- The producer is written in Go, using a [SyncProducer|https://godoc.org/github.com/Shopify/sarama#SyncProducer] from the Sarama library.
- The consumer is kafkacat version 1.3.1-13-ga6b599

Debug logs from Kafka :
{code}
[2018-08-01 17:21:11,201] DEBUG Accepted connection from /172.17.0.1:33718 on /172.17.0.3:9092 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] (kafka.network.Acceptor)
[2018-08-01 17:21:11,201] DEBUG Processor 1 listening to new connection from /172.17.0.1:33718 (kafka.network.Processor)
[2018-08-01 17:21:11,203] DEBUG [ReplicaManager broker=1001] Request key events-0 unblocked 0 fetch requests. (kafka.server.ReplicaManager)
[2018-08-01 17:21:11,203] DEBUG [Partition events-0 broker=1001] High watermark updated to 2 [0 : 136] (kafka.cluster.Partition)
[2018-08-01 17:21:11,203] DEBUG Sessionless fetch context returning 1 partition(s) (kafka.server.SessionlessFetchContext)
[2018-08-01 17:21:11,204] DEBUG [ReplicaManager broker=1001] Request key events-0 unblocked 1 fetch requests. (kafka.server.ReplicaManager)
[2018-08-01 17:21:11,204] DEBUG [ReplicaManager broker=1001] Request key events-0 unblocked 0 producer requests. (kafka.server.ReplicaManager)
[2018-08-01 17:21:11,204] DEBUG [ReplicaManager broker=1001] Request key events-0 unblocked 0 DeleteRecordsRequest. (kafka.server.ReplicaManager)
[2018-08-01 17:21:11,204] DEBUG [ReplicaManager broker=1001] Produce to local log in 2 ms (kafka.server.ReplicaManager)
[2018-08-01 17:21:11,205] DEBUG Created a new full FetchContext with 1 partition(s). Will not try to create a new session. (kafka.server.FetchManager)
[2018-08-01 17:21:11,210] DEBUG [ReplicaManager broker=1001] Produce to local log in 0 ms (kafka.server.ReplicaManager)
[2018-08-01 17:21:11,210] DEBUG [KafkaApi-1001] Produce request with correlation id 1 from client sarama on partition events-0 failed due to org.apache.kafka.common.errors.RecordTooLargeException (kafka.server.KafkaApis)
{code}

Debug logs from kafkacat :
{code}
%7|1533144071.204|SEND|rdkafka#consumer-1| [thrd:localhost:9092/1001]: localhost:9092/1001: Sent FetchRequest (v4, 70 bytes @ 0, CorrId 89)
%7|1533144071.309|RECV|rdkafka#consumer-1| [thrd:localhost:9092/1001]: localhost:9092/1001: Received FetchResponse (v4, 50 bytes, CorrId 89, rtt 104.62ms)
%7|1533144071.309|FETCH|rdkafka#consumer-1| [thrd:localhost:9092/1001]: localhost:9092/1001: Topic events [0] MessageSet size 0, error "Success", MaxOffset 2, Ver 2/2
%7|1533144071.309|BACKOFF|rdkafka#consumer-1| [thrd:localhost:9092/1001]: localhost:9092/1001: events [0]: Fetch backoff for 500ms: Broker: No more messages
%7|1533144071.309|FETCH|rdkafka#consumer-1| [thrd:localhost:9092/1001]: localhost:9092/1001: Topic events [0] in state active at offset 0 (1/100000 msgs, 0/1000000 kb queued, opv 2) is not fetchable: fetch backed off
%7|1533144071.309|FETCHADD|rdkafka#consumer-1| [thrd:localhost:9092/1001]: localhost:9092/1001: Removed events [0] from fetch list (0 entries, opv 2)
%7|1533144071.309|FETCH|rdkafka#consumer-1| [thrd:localhost:9092/1001]: localhost:9092/1001: Fetch backoff for 499ms
% Reached end of topic events [0] at offset 2
{code}
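For reference, the kafkacat session above could have been started with something along these lines (a sketch, not the exact command used; the broker address and topic name come from the logs, while the debug contexts passed to -d are an assumption):

{code}
kafkacat -C -b localhost:9092 -t events -o beginning -d broker,fetch
{code}

Consuming from the beginning this way reports "Reached end of topic events [0] at offset 2", matching the high watermark of 2 in the broker logs even though the second produce request was rejected.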