flink-user mailing list archives
From Piotr Nowojski <pi...@data-artisans.com>
Subject Re: How to broadcast messages to all task manager instances in cluster?
Date Fri, 11 May 2018 11:30:59 GMT
Hi,

I don’t quite understand your problem. If you broadcast a message as an input to the operator
that depends on this configuration, each parallel instance of your operator will receive it.
It shouldn't matter whether Flink schedules your operator on one, some, or all of the TaskManagers.
All that matters is that the operators running your configuration-sensitive code receive the
broadcasted message.


DataStream<> input = xxx;
DataStream<> controlConfigInput = yyy;

DataStream<> data = input
	.do()
	.something()
	.fancy();

controlConfigInput.broadcast()
	.connect(data)
	.flatMap(new MyFancyOperatorThatDependsOnConfigStream());
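The operator connected above could look like the following minimal sketch. It is a plain-Java stand-in for Flink's CoFlatMapFunction on a connected stream (no Flink dependency, so the class name `ThresholdFilter`, the threshold logic, and the use of a List in place of a Collector are all illustrative assumptions): one method handles data elements, the other handles broadcasted config updates, and because every parallel instance receives every broadcasted element, each instance keeps its own up-to-date copy of the configuration.

```java
import java.util.List;

// Simplified stand-in for a CoFlatMapFunction on a connected stream.
// flatMap1 processes data elements; flatMap2 processes broadcasted
// config updates. Each parallel instance sees every config element,
// so each one holds its own current copy of the configuration.
class ThresholdFilter {
    private long threshold = 0L;  // the "configuration" this operator depends on

    // Called for each element of the data stream: emit values at or
    // above the current threshold.
    public void flatMap1(long value, List<Long> out) {
        if (value >= threshold) {
            out.add(value);
        }
    }

    // Called for each element of the broadcasted config stream:
    // update the local copy of the configuration, emit nothing.
    public void flatMap2(long newThreshold, List<Long> out) {
        this.threshold = newThreshold;
    }
}
```

In real Flink code this class would implement `CoFlatMapFunction` and be passed to `flatMap` on the connected stream, as in the snippet above.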

Or slide 36 from here: https://www.slideshare.net/dataArtisans/apache-flink-datastream-api-basics

Piotrek

> On 11 May 2018, at 11:11, Di Tang <tangdi.bupt@gmail.com> wrote:
> 
> Hi guys:
> 
> I have a Flink job which contains multiple pipelines. Each pipeline depends on some configuration.
> I want to make the configuration dynamic, taking effect after a change, so I created a data source
> which periodically polls the database storing the configuration. However, how can I broadcast
> the events to all task manager instances? The datastream.broadcast() only applies to the
> parallel instances of an operator. And I don't want to connect the configuration data source
> to each pipeline because it is too verbose. If Flink cannot explicitly broadcast messages
> to task managers, is there any method to guarantee the parallel operator is distributed across
> all task managers?
> 
> Thanks,
> Di

