From: fhueske
To: issues@flink.incubator.apache.org
Reply-To: issues@flink.incubator.apache.org
Subject: [GitHub] flink pull request #4471: [FLINK-6094] [table] Implement stream-stream proct...
Message-Id: <20170810143403.3C055F5559@git1-us-west.apache.org>
Date: Thu, 10 Aug 2017 14:34:03 +0000 (UTC)

Github user fhueske commented on a diff in the pull request:

    https://github.com/apache/flink/pull/4471#discussion_r132277678

    --- Diff: flink-libraries/flink-table/src/main/scala/org/apache/flink/table/plan/util/UpdatingPlanChecker.scala ---
    @@ -90,40 +96,86 @@ object UpdatingPlanChecker {
               // resolve names of input fields
               .map(io => (inNames.get(io._1), io._2))

    -        // filter by input keys
    -        val outKeys = inOutNames.filter(io => keys.get.contains(io._1)).map(_._2)
    -        // check if all keys have been preserved
    -        if (outKeys.nonEmpty && outKeys.length == keys.get.length) {
    +        // filter by input keyAncestors
    +        val outKeyAncesters = inOutNames
    +          .filter(io => keyAncestors.get.map(e => e._1).contains(io._1))
    +          .map(io => (io._2, keyAncestors.get.find(ka => ka._1 == io._1).get._2))
    +
    +        // check if all keyAncestors have been preserved
    +        if (outKeyAncesters.nonEmpty &&
    +          outKeyAncesters.map(ka => ka._2).distinct.length ==
    +            keyAncestors.get.map(ka => ka._2).distinct.length) {
               // all key have been preserved (but possibly renamed)
    -          keys = Some(outKeys.toArray)
    +          Some(outKeyAncesters.toList)
             } else {
               // some (or all) keys have been removed. Keys are no longer unique and removed
    -          keys = None
    +          None
             }
    +      } else {
    +        None
           }
    +
         case _: DataStreamOverAggregate =>
    -      super.visit(node, ordinal, parent)
    -      // keys are always forwarded by Over aggregate
    +      // keyAncestors are always forwarded by Over aggregate
    +      visit(node.getInput(0))
         case a: DataStreamGroupAggregate =>
    -      // get grouping keys
    +      // get grouping keyAncestors
           val groupKeys = a.getRowType.getFieldNames.asScala.take(a.getGroupings.length)
    -      keys = Some(groupKeys.toArray)
    +      Some(groupKeys.map(e => (e, e)).toList)
         case w: DataStreamGroupWindowAggregate =>
    -      // get grouping keys
    +      // get grouping keyAncestors
           val groupKeys = w.getRowType.getFieldNames.asScala.take(w.getGroupings.length).toArray
           // get window start and end time
           val windowStartEnd = w.getWindowProperties.map(_.name)
           // we have only a unique key if at least one window property is selected
           if (windowStartEnd.nonEmpty) {
    -        keys = Some(groupKeys ++ windowStartEnd)
    +        Some((groupKeys ++ windowStartEnd).map(e => (e, e)).toList)
    +      } else {
    +        None
    +      }
    +
    +    case j: DataStreamJoin =>
    +      val leftKeyAncestors = visit(j.getLeft)
    +      val rightKeyAncestors = visit(j.getRight)
    +      if (!leftKeyAncestors.isDefined || !rightKeyAncestors.isDefined) {
    +        None
    +      } else {
    +        // both left and right contain keys
    +        val leftJoinKeys =
    --- End diff --

    This is easier to compute with:
    ```
    val leftFieldNames = j.getLeft.getRowType.getFieldNames
    val leftJoinKeys: Seq[String] = j.getJoinInfo.leftKeys.asScala.map(leftFieldNames.get(_))
    ```

---
If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastructure@apache.org or file a JIRA ticket with INFRA.
---
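The reviewer's suggestion boils down to resolving join-key positions into field names by index lookup. A minimal standalone sketch of that idea, without Flink or Calcite on the classpath: `fieldNames` and `keyIndices` below are hypothetical stand-ins for `j.getLeft.getRowType.getFieldNames` and `j.getJoinInfo.leftKeys`.

```scala
// Standalone sketch: resolve join-key indices into field names.
// fieldNames simulates getRowType.getFieldNames (positional field list);
// keyIndices simulates JoinInfo.leftKeys (0-based key positions).
object JoinKeyLookup {
  def resolveJoinKeys(fieldNames: Seq[String], keyIndices: Seq[Int]): Seq[String] =
    keyIndices.map(fieldNames(_))

  def main(args: Array[String]): Unit = {
    val leftFieldNames = Seq("id", "name", "ts")
    val leftKeys = Seq(0, 2)
    // Looks up each key index in the field-name list, preserving key order.
    val resolved = resolveJoinKeys(leftFieldNames, leftKeys)
    println(resolved.mkString(","))  // prints "id,ts"
  }
}
```

The design point is the same as in the review comment: a single positional `map` over the key indices replaces any manual correlation between key columns and output names.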