flink-issues mailing list archives

From xccui <...@git.apache.org>
Subject [GitHub] flink pull request #4532: [FLINK-7337] [table] Refactor internal handling of...
Date Sat, 12 Aug 2017 15:55:26 GMT
Github user xccui commented on a diff in the pull request:

    --- Diff: flink-libraries/flink-table/src/main/scala/org/apache/flink/table/api/StreamTableEnvironment.scala
    @@ -667,30 +719,62 @@ abstract class StreamTableEnvironment(
         // get CRow plan
         val plan: DataStream[CRow] = translateToCRow(logicalPlan, queryConfig)
    +    val rowtimeFields = logicalType
    +      .getFieldList.asScala
    +      .filter(f => FlinkTypeFactory.isRowtimeIndicatorType(f.getType))
    +    // convert the input type for the conversion mapper
    +    // the input will be changed in the OutputRowtimeProcessFunction later
    +    val convType = if (rowtimeFields.size > 1) {
    +      throw new TableException(
    --- End diff --
    I have an idea, though I'm not sure it's applicable. We could allow multiple rowtime
fields in a stream but activate only one per operator. Since the timestamps are stored in the
records, the inactive rowtime fields can simply be treated as ordinary fields. Any modification
of a rowtime field would render it invalid for rowtime use. IMO, not many queries depend on
the rowtime (maybe only over aggregates and joins), so the optimizer may be able to deduce
which rowtime field should be activated in each operator. However, some existing logic may be
affected by that change.
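The idea above could be sketched roughly as follows. This is plain Scala with hypothetical names (`Field`, `activateRowtime`), not Flink's actual internal API: a schema carries several rowtime candidates, and each operator demotes all but its chosen one to ordinary fields.

```scala
// Hypothetical model of the proposal: a schema may contain several rowtime
// fields, but an operator activates only one of them; the rest are treated
// as common (non-time-indicator) fields.
case class Field(name: String, isRowtime: Boolean)

// Demote every rowtime field except the one chosen as active.
def activateRowtime(schema: List[Field], active: String): List[Field] =
  schema.map { f =>
    if (f.isRowtime && f.name != active) f.copy(isRowtime = false)
    else f
  }

val schema = List(
  Field("orderTime", isRowtime = true),
  Field("shipTime", isRowtime = true),
  Field("amount", isRowtime = false))

// An operator that depends on "orderTime" keeps only that indicator active.
val activated = activateRowtime(schema, "orderTime")
println(activated.count(_.isRowtime)) // 1
```

As in the comment, the optimizer would be responsible for choosing `active` per operator, since the timestamps themselves stay stored in the records either way.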

