From: fhueske
To: issues@flink.incubator.apache.org
Reply-To: issues@flink.incubator.apache.org
Subject: [GitHub] flink pull request #4625: [FLINK-6233] [table] Support time-bounded stream i...
Content-Type: text/plain
Message-Id: <20170906102101.85D6FF32EA@git1-us-west.apache.org>
Date: Wed, 6 Sep 2017 10:21:01 +0000 (UTC)

Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/4625#discussion_r137228034

--- Diff: flink-libraries/flink-table/src/main/scala/org/apache/flink/table/runtime/join/TimeBoundedStreamInnerJoin.scala ---
@@ -0,0 +1,533 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.runtime.join
+
+import java.text.SimpleDateFormat
+import java.util
+import java.util.Map.Entry
+import java.util.{Date, List => JList}
+
+import org.apache.flink.api.common.functions.FlatJoinFunction
+import org.apache.flink.api.common.state._
+import org.apache.flink.api.common.typeinfo.{BasicTypeInfo, TypeInformation}
+import org.apache.flink.api.java.typeutils.ListTypeInfo
+import org.apache.flink.configuration.Configuration
+import org.apache.flink.streaming.api.functions.co.CoProcessFunction
+import org.apache.flink.table.codegen.Compiler
+import org.apache.flink.table.runtime.CRowWrappingCollector
+import org.apache.flink.table.runtime.join.JoinTimeIndicator.JoinTimeIndicator
+import org.apache.flink.table.runtime.types.CRow
+import org.apache.flink.table.util.Logging
+import org.apache.flink.types.Row
+import org.apache.flink.util.Collector
+
+/**
+ * A CoProcessFunction to execute time-bounded stream inner-joins.
+ *
+ * Sample criteria:
+ *
+ *   L.time between R.time + X and R.time + Y
+ *   or, equivalently, R.time between L.time - Y and L.time - X
+ *
+ * @param leftLowerBound X
+ * @param leftUpperBound Y
+ * @param allowedLateness the lateness allowed for the two streams
+ * @param leftType the input type of the left stream
+ * @param rightType the input type of the right stream
+ * @param genJoinFuncName the function name of the other non-equi conditions
+ * @param genJoinFuncCode the function code of the other non-equi conditions
+ * @param leftTimeIdx the time field index of the left stream
+ * @param rightTimeIdx the time field index of the right stream
+ * @param timeIndicator indicates whether the join is on proctime or rowtime
+ */
+class TimeBoundedStreamInnerJoin(
+    private val leftLowerBound: Long,
+    private val leftUpperBound: Long,
+    private val allowedLateness: Long,
+    private val leftType: TypeInformation[Row],
+    private val rightType: TypeInformation[Row],
+    private val genJoinFuncName: String,
+    private val genJoinFuncCode: String,
+    private val leftTimeIdx: Int,
+    private val rightTimeIdx: Int,
+    private val timeIndicator: JoinTimeIndicator)
+  extends CoProcessFunction[CRow, CRow, CRow]
+    with Compiler[FlatJoinFunction[Row, Row, Row]]
+    with Logging {
+
+  private var cRowWrapper: CRowWrappingCollector = _
+
+  // the join function for other conditions
+  private var joinFunction: FlatJoinFunction[Row, Row, Row] = _
+
+  // cache to store the left stream records
+  private var leftCache: MapState[Long, JList[Row]] = _
+  // cache to store the right stream records
+  private var rightCache: MapState[Long, JList[Row]] = _
+
+  // state to record the timer on the left stream. 0 means no timer set
+  private var leftTimerState: ValueState[Long] = _
+  // state to record the timer on the right stream. 0 means no timer set
+  private var rightTimerState: ValueState[Long] = _
+
+  private val leftRelativeSize: Long = -leftLowerBound
+  private val rightRelativeSize: Long = leftUpperBound
+
+  private val relativeWindowSize = rightRelativeSize + leftRelativeSize
+
+  private var leftOperatorTime: Long = 0L
+  private var rightOperatorTime: Long = 0L
+
+  private var backPressureSuggestion: Long = 0L
+
+  if (relativeWindowSize <= 0) {
+    LOG.warn("The relative window size is non-positive, please check the join conditions.")
+  }
+
+  if (allowedLateness < 0) {
+    throw new IllegalArgumentException("The allowed lateness must be non-negative.")
+  }
+
+  /**
+   * For holding back watermarks.
+   *
+   * @return the maximum delay for the outputs
+   */
+  def getMaxOutputDelay: Long = Math.max(leftRelativeSize, rightRelativeSize) + allowedLateness
+
+  /**
+   * For dynamic query optimization.
+   *
+   * @return the suggested offset time for back-pressure
+   */
+  def getBackPressureSuggestion: Long = backPressureSuggestion
+
+  override def open(config: Configuration) {
+    val clazz = compile(
+      getRuntimeContext.getUserCodeClassLoader,
+      genJoinFuncName,
+      genJoinFuncCode)
+    joinFunction = clazz.newInstance()
+
+    cRowWrapper = new CRowWrappingCollector()
+    cRowWrapper.setChange(true)
+
+    // Initialize the data caches.
+    val leftListTypeInfo: TypeInformation[JList[Row]] = new ListTypeInfo[Row](leftType)
+    val leftStateDescriptor: MapStateDescriptor[Long, JList[Row]] =
+      new MapStateDescriptor[Long, JList[Row]](timeIndicator + "InnerJoinLeftCache",
+        BasicTypeInfo.LONG_TYPE_INFO.asInstanceOf[TypeInformation[Long]], leftListTypeInfo)
+    leftCache = getRuntimeContext.getMapState(leftStateDescriptor)
+
+    val rightListTypeInfo: TypeInformation[JList[Row]] = new ListTypeInfo[Row](rightType)
+    val rightStateDescriptor: MapStateDescriptor[Long, JList[Row]] =
+      new MapStateDescriptor[Long, JList[Row]](timeIndicator + "InnerJoinRightCache",
+        BasicTypeInfo.LONG_TYPE_INFO.asInstanceOf[TypeInformation[Long]], rightListTypeInfo)
+    rightCache = getRuntimeContext.getMapState(rightStateDescriptor)
+
+    // Initialize the timer states.
+    val leftTimerStateDesc: ValueStateDescriptor[Long] =
+      new ValueStateDescriptor[Long](timeIndicator + "InnerJoinLeftTimerState",
+        classOf[Long])
+    leftTimerState = getRuntimeContext.getState(leftTimerStateDesc)
+
+    val rightTimerStateDesc: ValueStateDescriptor[Long] =
+      new ValueStateDescriptor[Long](timeIndicator + "InnerJoinRightTimerState",
+        classOf[Long])
+    rightTimerState = getRuntimeContext.getState(rightTimerStateDesc)
+  }
+
+  /**
+   * Process records from the left stream.
+   *
+   * @param cRowValue the input record
+   * @param ctx the context to register timer or get current time
+   * @param out the collector for outputting results
+   */
+  override def processElement1(
+      cRowValue: CRow,
+      ctx: CoProcessFunction[CRow, CRow, CRow]#Context,
+      out: Collector[CRow]): Unit = {
+    val timeForRecord: Long = getTimeForRecord(ctx, cRowValue, true)
+    getCurrentOperatorTime(ctx)
+    processElement(
+      cRowValue,
+      timeForRecord,
+      ctx,
+      out,
+      leftOperatorTime,
+      rightOperatorTime,
+      rightTimerState,
+      leftCache,
+      rightCache,
+      true
+    )
+  }
+
+  /**
+   * Process records from the right stream.
+   *
+   * @param cRowValue the input record
+   * @param ctx the context to get current time
+   * @param out the collector for outputting results
+   */
+  override def processElement2(
+      cRowValue: CRow,
+      ctx: CoProcessFunction[CRow, CRow, CRow]#Context,
+      out: Collector[CRow]): Unit = {
+    val timeForRecord: Long = getTimeForRecord(ctx, cRowValue, false)
+    getCurrentOperatorTime(ctx)
+    processElement(
+      cRowValue,
+      timeForRecord,
+      ctx,
+      out,
+      rightOperatorTime,
+      leftOperatorTime,
+      leftTimerState,
+      rightCache,
+      leftCache,
+      false
+    )
+  }
+
+  /**
+   * Put a record from the input stream into the cache and iterate the opposite cache to
+   * output records meeting the join conditions. If there is no timer set for the OPPOSITE
+   * STREAM, register one.
+   */
+  private def processElement(
+      cRowValue: CRow,
+      timeForRecord: Long,
+      ctx: CoProcessFunction[CRow, CRow, CRow]#Context,
+      out: Collector[CRow],
+      myWatermark: Long,
+      oppositeWatermark: Long,
+      oppositeTimeState: ValueState[Long],
+      recordListCache: MapState[Long, JList[Row]],
+      oppositeCache: MapState[Long, JList[Row]],
+      leftRecord: Boolean): Unit = {
+    if (relativeWindowSize > 0) {
+      //TODO Shall we consider adding a method for initialization with the context and collector?
+      cRowWrapper.out = out
+
+      val record = cRowValue.row
+
+      //TODO Only if the time of the record is greater than the watermark, can we continue.
+      if (timeForRecord >= myWatermark - allowedLateness) {
+        val oppositeLowerBound: Long =
+          if (leftRecord) timeForRecord - rightRelativeSize else timeForRecord - leftRelativeSize
+
+        val oppositeUpperBound: Long =
+          if (leftRecord) timeForRecord + leftRelativeSize else timeForRecord + rightRelativeSize
+
+        // Put the record into the cache for later use.
+        val recordList = if (recordListCache.contains(timeForRecord)) {
+          recordListCache.get(timeForRecord)
+        } else {
+          new util.ArrayList[Row]()
+        }
+        recordList.add(record)
+        recordListCache.put(timeForRecord, recordList)
+
+        // Register a timer on THE OTHER STREAM to remove records from the cache once they are
+        // expired.
+        if (oppositeTimeState.value == 0) {
+          registerCleanUpTimer(
+            ctx, timeForRecord, oppositeWatermark, oppositeTimeState, leftRecord, true)
+        }
+
+        // Join the record with records from the opposite stream.
+        val oppositeIterator = oppositeCache.iterator()
+        var oppositeEntry: Entry[Long, util.List[Row]] = null
+        var oppositeTime: Long = 0L
+        while (oppositeIterator.hasNext) {
+          oppositeEntry = oppositeIterator.next
+          oppositeTime = oppositeEntry.getKey
+          if (oppositeTime < oppositeLowerBound - allowedLateness) {
+            //TODO Considering the data out-of-order, we should not remove records here.
+          } else if (oppositeTime >= oppositeLowerBound && oppositeTime <= oppositeUpperBound) {
+            val oppositeRows = oppositeEntry.getValue
+            var i = 0
+            if (leftRecord) {
+              while (i < oppositeRows.size) {
+                joinFunction.join(record, oppositeRows.get(i), cRowWrapper)
+                i += 1
+              }
+            } else {
+              while (i < oppositeRows.size) {
+                joinFunction.join(oppositeRows.get(i), record, cRowWrapper)
+                i += 1
+              }
+            }
+          } else if (oppositeTime > oppositeUpperBound) {
+            //TODO If the keys are ordered, can we break here?
+          }
+        }
+      } else {
+        //TODO Need some extra logic here?
+        LOG.warn(s"$record is out-of-date.")
+      }
+    }
+  }
+
+  /**
+   * Register a timer for cleaning up records in a specified time.
+   *
+   * @param ctx the context to register timer
+   * @param timeForRecord time for the input record
+   * @param oppositeWatermark watermark of the opposite stream
+   * @param timerState stores the timestamp for the next timer
+   * @param leftRecord whether the triggering record is from the left stream
+   * @param firstTimer whether this is the first timer
+   */
+  private def registerCleanUpTimer(
+      ctx: CoProcessFunction[CRow, CRow, CRow]#Context,
+      timeForRecord: Long,
+      oppositeWatermark: Long,
+      timerState: ValueState[Long],
+      leftRecord: Boolean,
+      firstTimer: Boolean): Unit = {
+    val cleanUpTime = timeForRecord + (if (leftRecord) leftRelativeSize else rightRelativeSize) +
+      allowedLateness + 1
+    registerTimer(ctx, !leftRecord, cleanUpTime)
+    LOG.debug(s"Register a clean up timer on the ${if (leftRecord) "RIGHT" else "LEFT"} state:" +
+      s" timeForRecord = ${timeForRecord}, cleanUpTime = ${cleanUpTime}, oppositeWatermark = " +
+      s"${oppositeWatermark}")
+    timerState.update(cleanUpTime)
+    if (cleanUpTime <= oppositeWatermark + allowedLateness && firstTimer) {
+      backPressureSuggestion =
+        if (leftRecord) (oppositeWatermark + allowedLateness - cleanUpTime)
+        else -(oppositeWatermark + allowedLateness - cleanUpTime)
+      LOG.warn("The clean timer for the " +
+        s"${if (leftRecord) "left" else "right"}" +
+        s" stream is lower than ${if (leftRecord) "right" else "left"} watermark." +
+        s" requiredTime = ${formatTime(cleanUpTime)}, watermark = ${formatTime(oppositeWatermark)}," +
+        s"backPressureSuggestion = " + s"${backPressureSuggestion}.")
+    }
+  }
+
+  /**
+   * Called when a registered timer is fired.
+   * Remove records which are earlier than the expiration time,
+   * and register a new timer for the earliest remaining records.
+   *
+   * @param timestamp the timestamp of the timer
+   * @param ctx the context to register timer or get current time
+   * @param out the collector for returning result values
+   */
+  override def onTimer(
+      timestamp: Long,
+      ctx: CoProcessFunction[CRow, CRow, CRow]#OnTimerContext,
+      out: Collector[CRow]): Unit = {
+    getCurrentOperatorTime(ctx)
+    //TODO In the future, we should separate the left and right watermarks. Otherwise, the
+    //TODO registered timer of the faster stream will be delayed, even if the watermarks have
+    //TODO already been emitted by the source.
+    if (leftTimerState.value == timestamp) {
+      val rightExpirationTime = leftOperatorTime - rightRelativeSize - allowedLateness - 1
+      removeExpiredRecords(
+        timestamp,
+        rightExpirationTime,
+        leftOperatorTime,
+        rightCache,
+        leftTimerState,
+        ctx,
+        false
+      )
+    }
+
+    if (rightTimerState.value == timestamp) {

--- End diff --

Yes, you are right.

---
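
Editor's note on the bound arithmetic quoted above: the following is a minimal, self-contained sketch (not code from the pull request) of how the scaladoc criteria "L.time between R.time + X and R.time + Y" translate into leftRelativeSize/rightRelativeSize and the window probed on the opposite cache, mirroring the oppositeLowerBound/oppositeUpperBound computation in processElement. The object and method names are invented for illustration only.

```scala
// Sketch: for the predicate  L.time between R.time + X and R.time + Y,
// a left record at time t joins right records with timestamps in [t - Y, t - X],
// i.e. [t - rightRelativeSize, t + leftRelativeSize] with
// leftRelativeSize = -X and rightRelativeSize = Y (as in the class above).
object TimeBoundsSketch {

  /** Window on the right-side cache for a left record at time `t`. */
  def boundsForLeftRecord(t: Long, leftLowerBound: Long, leftUpperBound: Long): (Long, Long) = {
    val leftRelativeSize = -leftLowerBound   // -X
    val rightRelativeSize = leftUpperBound   //  Y
    (t - rightRelativeSize, t + leftRelativeSize)
  }

  def main(args: Array[String]): Unit = {
    // X = -5000 ms, Y = 10000 ms: L.time between R.time - 5s and R.time + 10s.
    val (lower, upper) = boundsForLeftRecord(100000L, leftLowerBound = -5000L, leftUpperBound = 10000L)
    println(s"matching right records lie in [$lower, $upper]")  // [90000, 105000]
  }
}
```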