flink-user mailing list archives

From Eron Wright <eronwri...@gmail.com>
Subject Re: How can I handle backpressure with event time.
Date Thu, 25 May 2017 16:06:01 GMT
Try setting the assigner on the Kafka consumer, rather than on the
DataStream:
https://ci.apache.org/projects/flink/flink-docs-release-1.3/dev/connectors/
kafka.html#kafka-consumers-and-timestamp-extractionwatermark-emission

I believe this will create a per-partition assigner, and the consumer will
emit only the minimum watermark across all of its partitions.
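To see why this helps, here is a minimal toy model (plain Python, not the Flink API) of per-partition watermarking with the minimum forwarded: each partition tracks its own bounded-out-of-orderness watermark, and the consumer's watermark is the minimum over partitions, so a lagging partition holds event time back instead of having its data dropped as late. The bound value and class names are illustrative assumptions.

```python
# Toy model of per-partition watermark assignment (NOT Flink code).
# Hypothetical bound, in the same units as the record timestamps.
MAX_OUT_OF_ORDERNESS = 5

class PartitionWatermarker:
    """Tracks a bounded-out-of-orderness watermark for one Kafka partition."""
    def __init__(self):
        self.max_ts = float("-inf")

    def on_record(self, timestamp):
        # Watermarks advance with the highest timestamp seen so far.
        self.max_ts = max(self.max_ts, timestamp)

    def watermark(self):
        return self.max_ts - MAX_OUT_OF_ORDERNESS

class PerPartitionConsumer:
    """Models a consumer that emits the minimum watermark over its partitions."""
    def __init__(self, num_partitions):
        self.partitions = [PartitionWatermarker() for _ in range(num_partitions)]

    def on_record(self, partition, timestamp):
        self.partitions[partition].on_record(timestamp)

    def watermark(self):
        # The slowest partition bounds the consumer's event time.
        return min(p.watermark() for p in self.partitions)

consumer = PerPartitionConsumer(2)
consumer.on_record(0, 100)   # partition 0 is up to date
consumer.on_record(1, 20)    # partition 1 lags, e.g. under backpressure
print(consumer.watermark())  # 15: the lagging partition holds back event time
```

With a single assigner on the DataStream, by contrast, the fast partition would have driven the watermark to 95 and the lagging partition's records would arrive behind the watermark as late data.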

Hope this helps,
-Eron

On Thu, May 25, 2017 at 3:21 AM, yunfan123 <yunfanfighting@foxmail.com>
wrote:

> For example, I want to merge two Kafka topics (named topicA and topicB) by
> a specific key with a max timeout.
> I use event time and the BoundedOutOfOrdernessTimestampExtractor class to
> generate watermarks.
> When some partitions of topicA are delayed by backpressure and the delay
> exceeds my max timeout, none of the delayed partitions in topicA (or the
> corresponding data in topicB) can be merged.
> What I want is that when backpressure happens, consumers consume based
> only on my event time.
>
>
>
> --
> View this message in context: http://apache-flink-user-
> mailing-list-archive.2336050.n4.nabble.com/How-can-I-
> handle-backpressure-with-event-time-tp13313.html
> Sent from the Apache Flink User Mailing List archive. mailing list archive
> at Nabble.com.
>
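For reference, the watermark behavior of the BoundedOutOfOrdernessTimestampExtractor mentioned in the question can be modeled as the highest timestamp seen so far minus a fixed lateness bound. This is a sketch in plain Python, not the Flink class itself; the names and the bound value are illustrative.

```python
class BoundedOutOfOrderness:
    """Toy model of Flink's BoundedOutOfOrdernessTimestampExtractor:
    the watermark trails the highest timestamp seen by a fixed bound."""
    def __init__(self, max_out_of_orderness):
        self.bound = max_out_of_orderness
        self.max_ts = float("-inf")

    def extract_timestamp(self, element_ts):
        self.max_ts = max(self.max_ts, element_ts)
        return element_ts

    def current_watermark(self):
        return self.max_ts - self.bound

extractor = BoundedOutOfOrderness(max_out_of_orderness=10)
for ts in (100, 95, 120):          # out of order, but within the bound
    extractor.extract_timestamp(ts)
print(extractor.current_watermark())  # 110: records with ts <= 110 are now late
```

Because the watermark follows the fastest data seen, a backpressured partition whose records lag beyond the bound falls behind the watermark, which is exactly the failure mode described above.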
