flink-user mailing list archives

From Ilya Karpov <idkf...@gmail.com>
Subject Re: kafka corrupt record exception
Date Tue, 02 Apr 2019 07:36:35 GMT
According to the docs (here: https://ci.apache.org/projects/flink/flink-docs-stable/dev/connectors/kafka.html#the-deserializationschema,
last paragraph), that's expected behaviour. Maybe you should think about writing your
own deserialisation schema to skip corrupted messages.
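For illustration, here is a minimal sketch of that idea (the wrapper class, its name, and the usage line below are my own assumptions, not from the docs): a DeserializationSchema that delegates to an inner schema and returns null when deserialisation fails, which, per the linked docs, makes the Flink Kafka consumer silently skip that record.

import org.apache.flink.api.common.serialization.DeserializationSchema;
import org.apache.flink.api.common.typeinfo.TypeInformation;

import java.io.IOException;

// Hypothetical wrapper: turns deserialization failures into nulls so the
// consumer skips the record instead of failing and restarting the job.
public class SkipCorruptSchema<T> implements DeserializationSchema<T> {

    private final DeserializationSchema<T> inner;

    public SkipCorruptSchema(DeserializationSchema<T> inner) {
        this.inner = inner;
    }

    @Override
    public T deserialize(byte[] message) throws IOException {
        try {
            return inner.deserialize(message);
        } catch (Exception e) {
            // Returning null tells the Flink Kafka consumer to skip this record.
            return null;
        }
    }

    @Override
    public boolean isEndOfStream(T nextElement) {
        return false;
    }

    @Override
    public TypeInformation<T> getProducedType() {
        return inner.getProducedType();
    }
}

You would then wrap your existing schema when building the consumer, e.g.
new FlinkKafkaConsumer<>("topic_log", new SkipCorruptSchema<>(new SimpleStringSchema()), props).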

> On 1 Apr 2019, at 18:19, Sushant Sawant <sushantsawant7007@gmail.com> wrote:
> 
> Hi,
> Thanks for the reply.
> But is there a way to skip this corrupt record from the Flink consumer?
> The Flink job goes in a loop: it restarts and then fails again at the same record.
> 
> 
> On Mon, 1 Apr 2019, 07:34 Congxian Qiu, <qcx978132955@gmail.com> wrote:
> Hi
> As you said, consuming from the Ubuntu terminal gives the same error; maybe you could send an
email to the Kafka user mailing list.
> 
> Best, Congxian
> On Apr 1, 2019, 05:26 +0800, Sushant Sawant <sushantsawant7007@gmail.com> wrote:
>> Hi team,
>> I am facing this exception,
>> org.apache.kafka.common.KafkaException: Received exception when fetching the next record from topic_log-3. If needed, please seek past the record to continue consumption.
>>         at org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.fetchRecords(Fetcher.java:1076)
>>         at org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.access$1200(Fetcher.java:944)
>>         at org.apache.kafka.clients.consumer.internals.Fetcher.fetchRecords(Fetcher.java:567)
>>         at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:528)
>>         at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1086)
>>         at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043)
>>         at org.apache.flink.streaming.connectors.kafka.internal.KafkaConsumerThread.run(KafkaConsumerThread.java:257)
>> Caused by: org.apache.kafka.common.errors.CorruptRecordException: Record size is less than the minimum record overhead (14)
>> 
>> Also, when I consume messages from the Ubuntu terminal consumer, I get the same error.
>> 
>> How can I skip this corrupt record?