kafka-dev mailing list archives

From "Julien Fabre (JIRA)" <j...@apache.org>
Subject [jira] [Created] (KAFKA-7230) Empty Record created when producer failed due to RecordTooLargeException
Date Wed, 01 Aug 2018 17:31:00 GMT
Julien Fabre created KAFKA-7230:
-----------------------------------

             Summary: Empty Record created when producer failed due to RecordTooLargeException
                 Key: KAFKA-7230
                 URL: https://issues.apache.org/jira/browse/KAFKA-7230
             Project: Kafka
          Issue Type: Bug
    Affects Versions: 1.1.0
            Reporter: Julien Fabre


When a producer tries to produce a RecordBatch bigger than the message.max.bytes value,
it fails with the error
{code:java}org.apache.kafka.common.errors.RecordTooLargeException{code}
but an empty Record gets created.

While hitting the RecordTooLargeException is expected, I was not expecting to see a new offset
with an empty Record in the topic.
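For context, the broker-side limit involved here is the message.max.bytes broker setting (overridable per topic via max.message.bytes). The value below is only illustrative, not the configuration of the broker in this report:

{code}
# server.properties (illustrative value)
message.max.bytes=1000012
{code}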

Is this a problem in Kafka, or should the consumer handle this case?

Test setup:
- Kafka 2.11-1.1.0
- The producer is written in Go, using a [SyncProducer|https://godoc.org/github.com/Shopify/sarama#SyncProducer]
from the Sarama library.
- The consumer is kafkacat version 1.3.1-13-ga6b599
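For reference, a producer can avoid the broker round trip entirely by checking the payload size before sending. The sketch below is a minimal, stdlib-only Go illustration of such a guard; checkMessageSize and the limit value are hypothetical, not Sarama's API (Sarama exposes a comparable client-side limit through its producer configuration):

{code:go}
package main

import (
	"errors"
	"fmt"
)

// checkMessageSize rejects payloads larger than the broker's configured
// message.max.bytes, so the producer never triggers RecordTooLargeException.
// This is an illustrative guard, not Sarama's implementation.
func checkMessageSize(payload []byte, maxMessageBytes int) error {
	if len(payload) > maxMessageBytes {
		return errors.New("message too large for configured message.max.bytes")
	}
	return nil
}

func main() {
	const maxMessageBytes = 1000000 // illustrative broker limit

	small := make([]byte, 1024)
	big := make([]byte, 2*1024*1024) // 2 MiB, over the limit

	fmt.Println(checkMessageSize(small, maxMessageBytes))
	fmt.Println(checkMessageSize(big, maxMessageBytes))
}
{code}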

Debug logs from Kafka:
{code}
[2018-08-01 17:21:11,201] DEBUG Accepted connection from /172.17.0.1:33718 on /172.17.0.3:9092
and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize
[actual|requested]: [102400|102400] (kafka.network.Acceptor)
[2018-08-01 17:21:11,201] DEBUG Processor 1 listening to new connection from /172.17.0.1:33718
(kafka.network.Processor)
[2018-08-01 17:21:11,203] DEBUG [ReplicaManager broker=1001] Request key events-0 unblocked
0 fetch requests. (kafka.server.ReplicaManager)
[2018-08-01 17:21:11,203] DEBUG [Partition events-0 broker=1001] High watermark updated to
2 [0 : 136] (kafka.cluster.Partition)
[2018-08-01 17:21:11,203] DEBUG Sessionless fetch context returning 1 partition(s) (kafka.server.SessionlessFetchContext)
[2018-08-01 17:21:11,204] DEBUG [ReplicaManager broker=1001] Request key events-0 unblocked
1 fetch requests. (kafka.server.ReplicaManager)
[2018-08-01 17:21:11,204] DEBUG [ReplicaManager broker=1001] Request key events-0 unblocked
0 producer requests. (kafka.server.ReplicaManager)
[2018-08-01 17:21:11,204] DEBUG [ReplicaManager broker=1001] Request key events-0 unblocked
0 DeleteRecordsRequest. (kafka.server.ReplicaManager)
[2018-08-01 17:21:11,204] DEBUG [ReplicaManager broker=1001] Produce to local log in 2 ms
(kafka.server.ReplicaManager)
[2018-08-01 17:21:11,205] DEBUG Created a new full FetchContext with 1 partition(s). Will
not try to create a new session. (kafka.server.FetchManager)
[2018-08-01 17:21:11,210] DEBUG [ReplicaManager broker=1001] Produce to local log in 0 ms
(kafka.server.ReplicaManager)
[2018-08-01 17:21:11,210] DEBUG [KafkaApi-1001] Produce request with correlation id 1 from
client sarama on partition events-0 failed due to org.apache.kafka.common.errors.RecordTooLargeException
(kafka.server.KafkaApis)
{code}

Debug logs from kafkacat:
{code}
%7|1533144071.204|SEND|rdkafka#consumer-1| [thrd:localhost:9092/1001]: localhost:9092/1001:
Sent FetchRequest (v4, 70 bytes @ 0, CorrId 89)
%7|1533144071.309|RECV|rdkafka#consumer-1| [thrd:localhost:9092/1001]: localhost:9092/1001:
Received FetchResponse (v4, 50 bytes, CorrId 89, rtt 104.62ms)
%7|1533144071.309|FETCH|rdkafka#consumer-1| [thrd:localhost:9092/1001]: localhost:9092/1001:
Topic events [0] MessageSet size 0, error "Success", MaxOffset 2, Ver 2/2
%7|1533144071.309|BACKOFF|rdkafka#consumer-1| [thrd:localhost:9092/1001]: localhost:9092/1001:
events [0]: Fetch backoff for 500ms: Broker: No more messages
%7|1533144071.309|FETCH|rdkafka#consumer-1| [thrd:localhost:9092/1001]: localhost:9092/1001:
Topic events [0] in state active at offset 0 (1/100000 msgs, 0/1000000 kb queued, opv 2) is
not fetchable: fetch backed off
%7|1533144071.309|FETCHADD|rdkafka#consumer-1| [thrd:localhost:9092/1001]: localhost:9092/1001:
Removed events [0] from fetch list (0 entries, opv 2)
%7|1533144071.309|FETCH|rdkafka#consumer-1| [thrd:localhost:9092/1001]: localhost:9092/1001:
Fetch backoff for 499ms
% Reached end of topic events [0] at offset 2
{code}




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
