flink-dev mailing list archives

From Stephan Ewen <se...@apache.org>
Subject Re: Externalizing the Flink connectors
Date Thu, 10 Dec 2015 15:11:52 GMT
I like this a lot. It has multiple advantages:

  - Obviously more frequent connector updates without being forced to go to
a snapshot version
  - Reduce complexity and build time of the core flink repository

We should make sure that for example 0.10.x connectors always work with
0.10.x flink core releases.
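To make the compatibility expectation concrete, a user's pom might pin matching minor versions roughly like this. This is a hypothetical sketch: the `flink-connector-kafka` artifact ID and the exact version numbers are assumptions about how the externalized connectors could be versioned, not a decided convention.

```xml
<!-- Hypothetical user pom fragment; artifact IDs and versions are
     illustrative assumptions, not a decided scheme. -->
<dependencies>
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-core</artifactId>
    <version>0.10.1</version>
  </dependency>
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka</artifactId>
    <!-- Released independently of core, but any 0.10.x connector
         would be expected to work with any 0.10.x core release. -->
    <version>0.10.2</version>
  </dependency>
</dependencies>
```

Under such a scheme a connector could ship a patch release (here 0.10.2) without waiting for a new core release, while the shared 0.10 minor version signals compatibility.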

Would we lose test coverage by putting the connectors into a separate
repository/maven project?



On Thu, Dec 10, 2015 at 3:45 PM, Fabian Hueske <fhueske@gmail.com> wrote:

> Sounds like a good idea to me.
>
> +1
>
> Fabian
>
> 2015-12-10 15:31 GMT+01:00 Maximilian Michels <mxm@apache.org>:
>
> > Hi squirrels,
> >
> > By this time, we have numerous connectors which let you read data
> > into Flink or write data out of Flink.
> >
> > On the streaming side we have
> >
> > - RollingSink
> > - Flume
> > - Kafka
> > - Nifi
> > - RabbitMQ
> > - Twitter
> >
> > On the batch side we have
> >
> > - Avro
> > - Hadoop compatibility
> > - HBase
> > - HCatalog
> > - JDBC
> >
> >
> > Many times we would have liked to release updates to the connectors or
> > even create new ones in between Flink releases. This is currently not
> > possible because the connectors are part of the main repository.
> >
> > Therefore, I have created a new repository at
> > https://git-wip-us.apache.org/repos/asf/flink-connectors.git. The idea
> > is to externalize the connectors to this repository. We can then
> > update and release them independently of the main Flink repository. I
> > think this will give us more flexibility in the development process.
> >
> > What do you think about this idea?
> >
> > Cheers,
> > Max
> >
>
