Subject: Re: [streaming, scala] Scala DataStream#addSink returns Java DataStreamSink
From: Aljoscha Krettek
Date: Mon, 14 Mar 2016 11:02:07 +0100
To: dev@flink.apache.org

By the way, I don't think it's a bug that addSink() returns the Java
DataStreamSink. Having a Scala-specific version of DataStreamSink would not
add functionality in this place, just code bloat.

> On 14 Mar 2016, at 10:05, Fabian Hueske wrote:
>
> Yes, we will have more of these issues in the future, and each issue will
> need a separate discussion.
> I don't think that clearly unintended errors (I hope we won't find any
> intended errors) are a sufficient reason to break a stable API.
> IMO, the question that needs to be answered is how much of an issue it is
> (put it on a scale: bug > limitation > inconsistent API) and whether there
> are workarounds that avoid API-breaking changes.
>
> Cheers, Fabian
>
> 2016-03-13 19:06 GMT+01:00 Gyula Fóra:
>
>> Hi,
>>
>> I think this is an important question that will surely come up in some
>> cases in the future.
>>
>> I see your point, Robert, that we have promised API compatibility for
>> 1.x.y releases, but I am not sure that this should cover things that are
>> clearly just unintended errors in the API on our side.
>>
>> I am not sure what would be the right action regarding issues like this
>> in the future.
>>
>> Gyula
>>
>> Chesnay Schepler wrote (13 Mar 2016, Sun, 12:37):
>>
>>> On 13.03.2016 12:14, Robert Metzger wrote:
>>>> I think it's too early to fork off a 2.0 branch. I have absolutely no
>>>> idea when a 2.0 release becomes relevant; it could easily be a year
>>>> from now.
>>> At first I was going to agree with Robert, but then... I mean, the issue
>>> with not allowing breaking changes is that effectively we won't work on
>>> these issues until 2.0 comes around, since otherwise the contributor
>>> would have to stash that change themselves in their own repository for
>>> god-knows how long.
>>> Chances are that work will go to waste anyway because they forget /
>>> delete it.
>>>
>>> Having a central place (not necessarily a separate branch, maybe a repo
>>> with a separate branch for every commit) where we can stash this work
>>> could prove useful; instead of starting to work on these issues all at
>>> once for 2.0, we could save some work by only having to rebase them in
>>> one way or another.
>>>
>>>> And for tracking API-breaking changes, maybe it makes sense to create a
>>>> 2.0.0 version in JIRA and set the "fix-for" of the issue to 2.0.
>>> +1 for adding a 2.0.0 version tag. This is the perfect use case for it.
>>>>
>>>> On Sun, Mar 13, 2016 at 12:08 PM, Márton Balassi
>>>> <balassi.marton@gmail.com> wrote:
>>>>
>>>>> Ok, if that is what we promised, let's stick to that.
>>>>> Then would you suggest opening a release-2.0 branch and merging it
>>>>> there?
>>>>>
>>>>> On Sun, Mar 13, 2016 at 11:43 AM, Robert Metzger wrote:
>>>>>
>>>>>> Hey,
>>>>>> JIRA was down for quite a while yesterday. Sadly, I don't think we
>>>>>> can merge the change because it's API-breaking.
>>>>>> One of the promises of the 1.0 release is that we are not breaking
>>>>>> any APIs in the 1.x.y series of Flink. We can fix those issues with
>>>>>> a 2.x release.
>>>>>>
>>>>>> On Sun, Mar 13, 2016 at 5:27 AM, Márton Balassi
>>>>>> <balassi.marton@gmail.com> wrote:
>>>>>>
>>>>>>> The JIRA issue is FLINK-3610.
>>>>>>>
>>>>>>> On Sat, Mar 12, 2016 at 8:39 PM, Márton Balassi
>>>>>>> <balassi.marton@gmail.com> wrote:
>>>>>>>
>>>>>>>> I have just come across a shortcoming of the streaming Scala API:
>>>>>>>> it completely lacks a Scala implementation of the DataStreamSink,
>>>>>>>> and the Java version is used instead. [1]
>>>>>>>>
>>>>>>>> I would regard this as a bug that needs a fix for 1.0.1.
>>>>>>>> Unfortunately, the fix is also API-breaking.
>>>>>>>>
>>>>>>>> I will post it to JIRA shortly - but issues.apache.org is
>>>>>>>> unresponsive for me currently. I wanted to raise the issue here as
>>>>>>>> it might affect the API.
>>>>>>>>
>>>>>>>> [1]
>>>>>>>> https://github.com/apache/flink/blob/master/flink-streaming-scala/src/main/scala/org/apache/flink/streaming/api/scala/DataStream.scala#L928-L929
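
For context, a minimal sketch of the API shape under discussion. The class
and method names follow the linked DataStream.scala, but the code below is
illustrative only, not the actual Flink source (the wrapper class is named
ScalaDataStream here purely to avoid a name clash):

    import org.apache.flink.streaming.api.datastream.{DataStream => JavaStream, DataStreamSink}
    import org.apache.flink.streaming.api.functions.sink.SinkFunction

    // Simplified stand-in for org.apache.flink.streaming.api.scala.DataStream.
    class ScalaDataStream[T](javaStream: JavaStream[T]) {
      // addSink delegates to the wrapped Java stream, so callers get back the
      // Java DataStreamSink rather than a Scala-specific wrapper class.
      def addSink(sinkFunction: SinkFunction[T]): DataStreamSink[T] =
        javaStream.addSink(sinkFunction)
    }

A Scala-specific DataStreamSink would essentially mirror the Java class method
for method, which is the "code bloat without added functionality" point made
above; the counter-argument in the thread is that exposing the Java type from
the Scala API is an inconsistency, and changing the return type later would be
API-breaking.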