flink-user mailing list archives

From "Tzu-Li (Gordon) Tai" <tzuli...@apache.org>
Subject Re: FlinkKafkaConsumer010 does not start from the next record on startup from offsets in Kafka
Date Wed, 22 Nov 2017 12:57:19 GMT
Hi Robert,

As expected with exactly-once guarantees, a record that caused a Flink job
to fail will be reprocessed when the job restarts.

If a specific "corrupt" record causes the job to fall into a
fail-and-restart loop, there is a way to let the Kafka consumer skip that
record: return null when attempting to deserialize it (specifically, from
the `deserialize` method of the `DeserializationSchema` you provide to the
consumer). The consumer then advances past the record's offset instead of
failing the job again.
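To illustrate, here is a minimal sketch of that null-on-corrupt-record pattern. It uses a plain static method as a stand-in for Flink's `DeserializationSchema.deserialize(byte[])` (the real interface lives in `org.apache.flink.api.common.serialization`); the class name, the "empty payload is corrupt" validity check, and the `consume` loop are all hypothetical, and only the "return null to skip" contract matches what the Kafka consumer actually honors:

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class SkipCorruptDeserializer {

    // Stand-in for DeserializationSchema<String>.deserialize(byte[]).
    // Returning null tells the Flink Kafka consumer to drop the record
    // instead of failing the job.
    public static String deserialize(byte[] message) {
        try {
            String value = new String(message, StandardCharsets.UTF_8);
            // Hypothetical validity check: treat empty payloads as corrupt.
            if (value.isEmpty()) {
                return null; // skip this record
            }
            return value;
        } catch (Exception e) {
            return null; // any deserialization failure: skip, don't throw
        }
    }

    // Simulates how the consumer skips null results while still moving
    // past each record's offset.
    public static List<String> consume(List<byte[]> records) {
        List<String> out = new ArrayList<>();
        for (byte[] record : records) {
            String value = deserialize(record);
            if (value != null) {
                out.add(value);
            }
        }
        return out;
    }
}
```

The key point is that `deserialize` swallows the failure and returns null rather than letting an exception propagate, which is what would otherwise restart the job and replay the same record.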

Cheers,
Gordon



--
Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/
