flink-issues mailing list archives

From fhueske <...@git.apache.org>
Subject [GitHub] flink pull request: [FLINK-3596] DataSet RelNode refactoring
Date Wed, 09 Mar 2016 16:27:20 GMT
Github user fhueske commented on a diff in the pull request:

    https://github.com/apache/flink/pull/1777#discussion_r55544962
  
    --- Diff: flink-libraries/flink-table/src/main/scala/org/apache/flink/api/table/plan/nodes/dataset/DataSetAggregate.scala ---
    @@ -69,37 +72,55 @@ class DataSetGroupReduce(
     
         expectedType match {
           case Some(typeInfo) if typeInfo.getTypeClass != classOf[Row] =>
    -        throw new PlanGenException("GroupReduce operations currently only support returning Rows.")
    +        throw new PlanGenException("Aggregate operations currently only support returning Rows.")
           case _ => // ok
         }
     
    +    val groupingKeys = (0 until grouping.length).toArray
    +    // add grouping fields, position keys in the input, and input type
    +    val aggregateResult = AggregateUtil.createOperatorFunctionsForAggregates(namedAggregates,
    +      inputType, rowType, grouping)
    +
         val inputDS = input.asInstanceOf[DataSetRel].translateToPlan(
           config,
           // tell the input operator that this operator currently only supports Rows as input
           Some(TypeConverter.DEFAULT_ROW_TYPE))
     
    +    val intermediateType = determineReturnType(
    +      aggregateResult.intermediateDataType,
    +      expectedType,
    +      config.getNullCheck,
    +      config.getEfficientTypeUsage)
    +
    +
         // get the output types
    -    val fieldsNames = rowType.getFieldNames
         val fieldTypes: Array[TypeInformation[_]] = rowType.getFieldList.asScala
         .map(f => f.getType.getSqlTypeName)
         .map(n => TypeConverter.sqlTypeToTypeInfo(n))
         .toArray
     
         val rowTypeInfo = new RowTypeInfo(fieldTypes)
    +
    +    val mappedInput = inputDS.map(aggregateResult.mapFunc.apply(
    --- End diff --
    
    We could refactor the `AggregateUtil` such that it returns the `MapFunction` directly and not a function to generate the `MapFunction`. I think we would then not need to expose the `intermediateDataType` of the `aggregateResult`.
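    To make the suggestion concrete, here is a minimal sketch of the shape such a refactoring could take; the names `AggregateOperatorFunctions` and `AggregateUtilSketch` are illustrative (not the existing flink-table API), and the flink-table `Row` type of this era is assumed:
    
    ```scala
    // A minimal sketch, assuming hypothetical names; not the existing AggregateUtil API.
    import org.apache.flink.api.common.functions.{GroupReduceFunction, MapFunction}
    import org.apache.flink.api.table.Row
    
    // Return ready-to-use operator functions instead of a factory for the MapFunction.
    case class AggregateOperatorFunctions(
        mapFunction: MapFunction[Row, Row],            // prepares the intermediate rows
        reduceFunction: GroupReduceFunction[Row, Row]) // computes the final aggregates
    
    object AggregateUtilSketch {
    
      def createOperatorFunctionsForAggregates(
          /* namedAggregates, inputType, rowType, grouping */): AggregateOperatorFunctions = {
        // Determine the intermediate row type here and instantiate the MapFunction with
        // it, so the intermediate type never leaks to DataSetAggregate.translateToPlan.
        ???
      }
    }
    ```
    
    With that shape, the caller would shrink to something like `inputDS.map(aggregateResult.mapFunction)`, and the handling of the intermediate data type could move entirely into `AggregateUtil`.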


