Date: Sat, 20 May 2017 18:39:04 +0000 (UTC)
From: "sunjincheng (JIRA)"
To: dev@flink.apache.org
Subject: [jira] [Created] (FLINK-6650) Fix Non-windowed group-aggregate error when using append-table mode.

sunjincheng created FLINK-6650:
----------------------------------

Summary: Fix Non-windowed group-aggregate error when using append-table mode.
Key: FLINK-6650
URL: https://issues.apache.org/jira/browse/FLINK-6650
Project: Flink
Issue Type: Sub-task
Reporter: sunjincheng
Assignee: sunjincheng

When I test the non-windowed group-aggregate with

{code}
stream.toTable(tEnv, 'a, 'b, 'c)
  .select('a.sum, weightAvgFun('a, 'b))
  .toAppendStream[Row]
  .addSink(new StreamITCase.StringSink)
{code}

I get the following error:

{code}
org.apache.flink.table.api.TableException: Table is not an append-only table. Output needs to handle update and delete changes.
	at org.apache.flink.table.api.StreamTableEnvironment.translate(StreamTableEnvironment.scala:631)
	at org.apache.flink.table.api.StreamTableEnvironment.translate(StreamTableEnvironment.scala:607)
	at org.apache.flink.table.api.scala.StreamTableEnvironment.toAppendStream(StreamTableEnvironment.scala:219)
	at org.apache.flink.table.api.scala.StreamTableEnvironment.toAppendStream(StreamTableEnvironment.scala:195)
	at org.apache.flink.table.api.scala.TableConversions.toAppendStream(TableConversions.scala:121)
{code}

The reason is {{DataStreamGroupAggregate#producesUpdates}}:

{code}
override def producesUpdates = true
{code}

I think, from the user's point of view, what the user wants is (for example):

Data:
{code}
val data = List(
  (1L, 1, "Hello"),
  (2L, 2, "Hello"),
  (3L, 3, "Hello"),
  (4L, 4, "Hello"),
  (5L, 5, "Hello"),
  (6L, 6, "Hello"),
  (7L, 7, "Hello World"),
  (8L, 8, "Hello World"),
  (20L, 20, "Hello World"))
{code}

* Case 1: TableAPI (append mode, emitting every intermediate sum)
{code}
stream.toTable(tEnv, 'a, 'b, 'c).select('a.sum).toAppendStream[Row]
  .addSink(new StreamITCase.StringSink)
{code}
Result:
{code}
1
3
6
10
15
21
28
36
56
{code}

* Case 2: TableAPI (retract mode, materializing only the latest value)
{code}
stream.toTable(tEnv, 'a, 'b, 'c).select('a.sum).toRetractStream[Row]
  .addSink(new StreamITCase.RetractingSink)
{code}
Result:
{code}
56
{code}

In fact, for Case 1 we can use an unbounded OVER window, as follows:

TableAPI
{code}
stream.toTable(tEnv, 'a, 'b, 'c, 'proctime.proctime)
  .window(Over orderBy 'proctime preceding UNBOUNDED_ROW as 'w)
  .select('a.sum over 'w)
  .toAppendStream[Row].addSink(new StreamITCase.StringSink)
{code}
Result:
{code}
Same as Case 1
{code}

But after [FLINK-6649|https://issues.apache.org/jira/browse/FLINK-6649], OVER can no longer express Case 1 with early firing. So I still think a non-windowed group-aggregate is not always an update table; the user should be able to decide which mode to use. Is there any drawback to this improvement? Any feedback is welcome.

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
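[Editor's note] The two output modes discussed above can be illustrated with a small plain-Scala sketch, with no Flink dependency. The object and method names here are hypothetical, chosen only for illustration; the point is that an append stream emits every intermediate aggregate as a new row, while a retracting sink keeps only the latest value, matching the Case 1 and Case 2 results from the issue description.

```scala
// Minimal sketch of append vs. retract semantics for a running sum
// over the 'a column of the example data (1, 2, ..., 8, 20).
object GroupAggregateModes {
  val data: List[Long] = List(1L, 2L, 3L, 4L, 5L, 6L, 7L, 8L, 20L)

  // Append mode: each incoming row produces a new output row with the
  // updated aggregate, so all intermediate sums are emitted.
  def appendStream(xs: List[Long]): List[Long] =
    xs.scanLeft(0L)(_ + _).tail // 1, 3, 6, 10, 15, 21, 28, 36, 56

  // Retract mode: each update retracts the previous value, so a
  // retracting sink materializes only the final aggregate.
  def retractingSinkResult(xs: List[Long]): Long =
    appendStream(xs).last // 56

  def main(args: Array[String]): Unit = {
    println(appendStream(data).mkString("\n"))
    println(retractingSinkResult(data))
  }
}
```

This is why the user-visible choice between `toAppendStream` and `toRetractStream` matters: both carry the same aggregate, but they expose different views of it to the sink.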