flink-dev mailing list archives

From Gyula Fóra <gyula.f...@gmail.com>
Subject Re: Adding the streaming project to the main repository
Date Mon, 18 Aug 2014 19:38:02 GMT
Hey,

The simple reduce is like what you said, yes. But there is also a grouped
reduce, which you can use by calling .groupBy(keyposition) and then reduce.

There are also reduces for windows: batchReduce and windowReduce. batchReduce
gives you a sliding window over a predefined number of records, and
windowReduce gives you the same but by time. (There are grouped versions of
these as well.)
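For illustration, here is a minimal sketch of how these variants might look in
code (imports omitted; the reducer interface, exact method signatures, and the
window parameters below are assumptions based on the names in this thread, not
the definitive API — see the linked examples for the real thing):

// Hypothetical sketch only: signatures and window parameters are assumptions.
DataStream<Tuple2<String, Integer>> counts = ...; // e.g. (word, 1) pairs

// Simple reduce: one running aggregate over the whole stream.
counts.reduce(new ReduceFunction<Tuple2<String, Integer>>() {
    @Override
    public Tuple2<String, Integer> reduce(Tuple2<String, Integer> a,
                                          Tuple2<String, Integer> b) {
        return new Tuple2<String, Integer>(b.f0, a.f1 + b.f1);
    }
});

// Grouped reduce: group on the key position first, then reduce per key.
counts.groupBy(0).reduce(/* same ReduceFunction as above */);

// Window variants: count-based and time-based sliding windows.
// The 100-record and 5000 ms parameters are made up for illustration.
// counts.batchReduce(reducer, 100);
// counts.windowReduce(reducer, 5000);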

Cheers,
Gyula


On Mon, Aug 18, 2014 at 9:19 PM, Fabian Hueske <fhueske@apache.org> wrote:

> Hi folks,
>
> great work!
>
> Looking at the example, I have a quick question: what are the semantics of
> the Reduce operator? I guess it's not a window reduce.
> Is it backed by a hash table, so that every input tuple updates the table
> and emits the updated value?
>
> Cheers, Fabian
>
>
> 2014-08-18 20:53 GMT+02:00 Stephan Ewen <sewen@apache.org>:
>
> > The streaming code is in "flink-addons", for new/experimental code.
> >
> > Documentation should come over the next days/weeks, definitely before we
> > make this part of the core.
> >
> > Right now, I would suggest having a look at some of the examples to get a
> > feeling for the add-on. Check, for example, this one:
> >
> > https://github.com/apache/incubator-flink/tree/master/flink-addons/flink-streaming/flink-streaming-examples/src/main/java/org/apache/flink/streaming/examples/wordcount
> >
> > (The example reads a file for simplicity, but the project also provides
> > connectors for Kafka, RabbitMQ, ...)
> >
>
