From: fhueske
To: issues@flink.incubator.apache.org
Reply-To: issues@flink.incubator.apache.org
Subject: [GitHub] flink pull request #4625: [FLINK-6233] [table] Support time-bounded stream i...
Message-Id: <20170905215018.78E72F568C@git1-us-west.apache.org>
Date: Tue, 5 Sep 2017 21:50:18 +0000 (UTC)

Github user fhueske commented on a diff in the pull request:

    https://github.com/apache/flink/pull/4625#discussion_r137100457

--- Diff: flink-libraries/flink-table/src/main/scala/org/apache/flink/table/runtime/join/TimeBoundedStreamInnerJoin.scala ---
@@ -0,0 +1,533 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.runtime.join
+
+import java.text.SimpleDateFormat
+import java.util
+import java.util.Map.Entry
+import java.util.{Date, List => JList}
+
+import org.apache.flink.api.common.functions.FlatJoinFunction
+import org.apache.flink.api.common.state._
+import org.apache.flink.api.common.typeinfo.{BasicTypeInfo, TypeInformation}
+import org.apache.flink.api.java.typeutils.ListTypeInfo
+import org.apache.flink.configuration.Configuration
+import org.apache.flink.streaming.api.functions.co.CoProcessFunction
+import org.apache.flink.table.codegen.Compiler
+import org.apache.flink.table.runtime.CRowWrappingCollector
+import org.apache.flink.table.runtime.join.JoinTimeIndicator.JoinTimeIndicator
+import org.apache.flink.table.runtime.types.CRow
+import org.apache.flink.table.util.Logging
+import org.apache.flink.types.Row
+import org.apache.flink.util.Collector
+
+/**
+  * A CoProcessFunction to execute time-bounded stream inner-join.
+  *
+  * Sample criteria:
+  *
+  *   L.time between R.time + X and R.time + Y
+  *   or, equivalently, R.time between L.time - Y and L.time - X
+  *
+  * @param leftLowerBound  X
+  * @param leftUpperBound  Y
+  * @param allowedLateness the lateness allowed for the two streams
+  * @param leftType        the input type of the left stream
+  * @param rightType       the input type of the right stream
+  * @param genJoinFuncName the function name of the other non-equi conditions
+  * @param genJoinFuncCode the function code of the other non-equi conditions
+  * @param leftTimeIdx     the index of the time attribute in the left input
+  * @param rightTimeIdx    the index of the time attribute in the right input
+  * @param timeIndicator   indicates whether the join is on proctime or rowtime
+  */
+class TimeBoundedStreamInnerJoin(
+    private val leftLowerBound: Long,
+    private val leftUpperBound: Long,
+    private val allowedLateness: Long,
+    private val leftType: TypeInformation[Row],
+    private val rightType: TypeInformation[Row],
+    private val genJoinFuncName: String,
+    private val genJoinFuncCode: String,
+    private val leftTimeIdx: Int,
+    private val rightTimeIdx: Int,
+    private val timeIndicator: JoinTimeIndicator)
+  extends CoProcessFunction[CRow, CRow, CRow]
+    with Compiler[FlatJoinFunction[Row, Row, Row]]
+    with Logging {
+
+  private var cRowWrapper: CRowWrappingCollector = _
+
+  // the join function for other conditions
+  private var joinFunction: FlatJoinFunction[Row, Row, Row] = _
+
+  // cache to store the left stream records
+  private var leftCache: MapState[Long, JList[Row]] = _
+  // cache to store the right stream records
+  private var rightCache: MapState[Long, JList[Row]] = _
+
+  // state to record the timer on the left stream. 0 means no timer set
+  private var leftTimerState: ValueState[Long] = _
+  // state to record the timer on the right stream. 0 means no timer set
+  private var rightTimerState: ValueState[Long] = _
+
+  private val leftRelativeSize: Long = -leftLowerBound
+  private val rightRelativeSize: Long = leftUpperBound
+
+  private val relativeWindowSize = rightRelativeSize + leftRelativeSize
+
+  private var leftOperatorTime: Long = 0L
+  private var rightOperatorTime: Long = 0L
+
+  private var backPressureSuggestion: Long = 0L
+
+  if (relativeWindowSize <= 0) {
+    LOG.warn("The relative window size is non-positive, please check the join conditions.")
+  }
+
+  if (allowedLateness < 0) {
+    throw new IllegalArgumentException("The allowed lateness must be non-negative.")
+  }
+
+  /**
+    * For holding back watermarks.
+    *
+    * @return the maximum delay for the outputs
+    */
+  def getMaxOutputDelay = Math.max(leftRelativeSize, rightRelativeSize) + allowedLateness
+
+  /**
+    * For dynamic query optimization.
+    *
+    * @return the suggested offset time for back-pressure
+    */
+  def getBackPressureSuggestion = backPressureSuggestion
+
+  override def open(config: Configuration) {
+    val clazz = compile(
+      getRuntimeContext.getUserCodeClassLoader,
+      genJoinFuncName,
+      genJoinFuncCode)
+    joinFunction = clazz.newInstance()
+
+    cRowWrapper = new CRowWrappingCollector()
+    cRowWrapper.setChange(true)
+
+    // Initialize the data caches.
+    val leftListTypeInfo: TypeInformation[JList[Row]] = new ListTypeInfo[Row](leftType)
+    val leftStateDescriptor: MapStateDescriptor[Long, JList[Row]] =
+      new MapStateDescriptor[Long, JList[Row]](timeIndicator + "InnerJoinLeftCache",
+        BasicTypeInfo.LONG_TYPE_INFO.asInstanceOf[TypeInformation[Long]], leftListTypeInfo)
+    leftCache = getRuntimeContext.getMapState(leftStateDescriptor)
+
+    val rightListTypeInfo: TypeInformation[JList[Row]] = new ListTypeInfo[Row](rightType)
+    val rightStateDescriptor: MapStateDescriptor[Long, JList[Row]] =
+      new MapStateDescriptor[Long, JList[Row]](timeIndicator + "InnerJoinRightCache",
+        BasicTypeInfo.LONG_TYPE_INFO.asInstanceOf[TypeInformation[Long]], rightListTypeInfo)
+    rightCache = getRuntimeContext.getMapState(rightStateDescriptor)
+
+    // Initialize the timer states.
+    val leftTimerStateDesc: ValueStateDescriptor[Long] =
+      new ValueStateDescriptor[Long](timeIndicator + "InnerJoinLeftTimerState",
+        classOf[Long])
+    leftTimerState = getRuntimeContext.getState(leftTimerStateDesc)
+
+    val rightTimerStateDesc: ValueStateDescriptor[Long] =
+      new ValueStateDescriptor[Long](timeIndicator + "InnerJoinRightTimerState",
+        classOf[Long])
+    rightTimerState = getRuntimeContext.getState(rightTimerStateDesc)
+  }
+
+  /**
+    * Process records from the left stream.
+    *
+    * @param cRowValue the input record
+    * @param ctx       the context to register timer or get current time
+    * @param out       the collector for outputting results
+    */
+  override def processElement1(
+      cRowValue: CRow,
+      ctx: CoProcessFunction[CRow, CRow, CRow]#Context,
+      out: Collector[CRow]): Unit = {
+    val timeForRecord: Long = getTimeForRecord(ctx, cRowValue, true)
+    getCurrentOperatorTime(ctx)
+    processElement(
+      cRowValue,
+      timeForRecord,
+      ctx,
+      out,
+      leftOperatorTime,
+      rightOperatorTime,
+      rightTimerState,
+      leftCache,
+      rightCache,
+      true
+    )
+  }
+
+  /**
+    * Process records from the right stream.
+    *
+    * @param cRowValue the input record
+    * @param ctx       the context to get current time
+    * @param out       the collector for outputting results
+    */
+  override def processElement2(
+      cRowValue: CRow,
+      ctx: CoProcessFunction[CRow, CRow, CRow]#Context,
+      out: Collector[CRow]): Unit = {
+    val timeForRecord: Long = getTimeForRecord(ctx, cRowValue, false)
+    getCurrentOperatorTime(ctx)
+    processElement(
+      cRowValue,
+      timeForRecord,
+      ctx,
+      out,
+      rightOperatorTime,
+      leftOperatorTime,
+      leftTimerState,
+      rightCache,
+      leftCache,
+      false
+    )
+  }
+
+  /**
+    * Put a record from the input stream into the cache and iterate the opposite cache to
+    * output records meeting the join conditions. If there is no timer set for the OPPOSITE
+    * STREAM, register one.
+    */
+  private def processElement(
+      cRowValue: CRow,
+      timeForRecord: Long,
+      ctx: CoProcessFunction[CRow, CRow, CRow]#Context,
+      out: Collector[CRow],
+      myWatermark: Long,
+      oppositeWatermark: Long,
+      oppositeTimeState: ValueState[Long],
+      recordListCache: MapState[Long, JList[Row]],
+      oppositeCache: MapState[Long, JList[Row]],
+      leftRecord: Boolean): Unit = {
+    if (relativeWindowSize > 0) {
+      // TODO Shall we consider adding a method for initialization with the context and collector?
+      cRowWrapper.out = out
+
+      val record = cRowValue.row
+
+      // TODO Only if the time of the record is greater than the watermark can we continue.
+      if (timeForRecord >= myWatermark - allowedLateness) {
+        val oppositeLowerBound: Long =
+          if (leftRecord) timeForRecord - rightRelativeSize else timeForRecord - leftRelativeSize
+
+        val oppositeUpperBound: Long =
+          if (leftRecord) timeForRecord + leftRelativeSize else timeForRecord + rightRelativeSize
+
+        // Put the record into the cache for later use.
+        val recordList = if (recordListCache.contains(timeForRecord)) {
--- End diff --

    we should directly call `get()` and check for a `null` return value. `contains()` followed by `get()` would result in two RocksDB accesses, one of which can be avoided.

---
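A rough sketch of what the suggested change could look like (not part of the PR): Flink's `MapState#get` returns `null` when there is no entry for the key, so the `contains()`/`get()` pair on the last diff line can be collapsed into a single `get()` plus a `null` check. The `ArrayList` fallback and the trailing `add`/`put` calls are assumptions about how the truncated branch continues.

```scala
// Sketch only -- assumes the branch creates a fresh list when the key is not
// yet in the cache and writes the updated list back, as is typical for a
// MapState[Long, JList[Row]]-based record cache.
val cachedRows = recordListCache.get(timeForRecord) // single state (RocksDB) access
val recordList: JList[Row] =
  if (cachedRows != null) cachedRows else new util.ArrayList[Row]()
recordList.add(record)
recordListCache.put(timeForRecord, recordList)
```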