From: Aljoscha Krettek
Date: Sat, 23 Apr 2016 05:48:37 +0000
Subject: Re: Flink program without a line of code
To: user@flink.apache.org

Hi,

I think if the Table API/SQL API evolves enough, it should be possible to supply a Flink program as just an SQL query plus source/sink definitions. Hopefully, in the future. :-)

Cheers,
Aljoscha

On Fri, 22 Apr 2016 at 23:10 Fabian Hueske <fhueske@gmail.com> wrote:

> Hi Alex,
>
> Welcome to the Flink community!
> Right now, there is no way to specify a Flink program without writing code (Java, Scala, Python (beta)).
>
> In principle, it is possible to put such functionality on top of the DataStream or DataSet APIs.
> This has been done before for other programming APIs (Flink's own libraries Table API, Gelly, and FlinkML, and external projects such as Apache Beam / Google DataFlow, Mahout, Cascading, ...). However, all of these are again programming APIs, some specialized for certain use cases.
>
> Specifying Flink programs by config files (or graphically) would require a data model, a DataStream/DataSet program generator, and probably a code generation component.
>
> Best,
> Fabian
>
> 2016-04-22 18:41 GMT+02:00 Alexander Smirnov <alexander.smirnoff@gmail.com>:
>
>> Hi guys!
>>
>> I'm new to Flink, and actually to this mailing list as well :) This is my first message.
>> I'm still reading the documentation, and I would say Flink is an amazing system! Thanks to everybody who participated in the development!
>>
>> The information I didn't find in the documentation is whether it is possible to describe a data (stream) transformation without any code (Java/Scala).
>> I mean, is it possible to describe the data source functions, all of the operators, the connections between them, and the sinks in a plain-text configuration file and then feed it to Flink?
>> In this case it would be possible to change the data flow without recompilation/redeployment.
>>
>> Is there similar functionality in Flink? Maybe some third-party plugin?
>>
>> Thank you,
>> Alex
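For illustration, here is a minimal sketch of the config-driven approach Fabian describes: a small driver that reads a plain-text properties file and builds a DataStream program from it. This is not an existing Flink feature; the ConfigDrivenJob class, the pipeline.properties format, and its keys are made up for this example, and only a socket source, a substring filter, and a print sink are wired up.

import java.io.FileInputStream;
import java.util.Properties;

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ConfigDrivenJob {

    public static void main(String[] args) throws Exception {
        // pipeline.properties (illustrative format, not a Flink standard):
        //   source.host=localhost
        //   source.port=9999
        //   filter.contains=ERROR
        //   sink=print
        Properties conf = new Properties();
        conf.load(new FileInputStream(args[0]));

        StreamExecutionEnvironment env =
            StreamExecutionEnvironment.getExecutionEnvironment();

        // Source: read lines of text from the configured socket
        DataStream<String> lines = env.socketTextStream(
            conf.getProperty("source.host"),
            Integer.parseInt(conf.getProperty("source.port")));

        // Operator: keep only lines containing the configured substring
        final String needle = conf.getProperty("filter.contains");
        DataStream<String> filtered = lines.filter(line -> line.contains(needle));

        // Sink: this sketch only knows how to print to stdout
        if ("print".equals(conf.getProperty("sink"))) {
            filtered.print();
        }

        env.execute("config-driven pipeline");
    }
}

Anything beyond this toy pipeline quickly runs into the points Fabian raises: the config format needs a real data model for records and operator parameters, and arbitrary user logic still ends up requiring code generation or pluggable user classes.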