flink-issues mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (FLINK-3225) Optimize logical Table API plans in Calcite
Date Fri, 29 Jan 2016 09:37:39 GMT

    [ https://issues.apache.org/jira/browse/FLINK-3225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15123253#comment-15123253 ]

ASF GitHub Bot commented on FLINK-3225:
---------------------------------------

Github user twalthr commented on a diff in the pull request:

    https://github.com/apache/flink/pull/1559#discussion_r51241130
  
    --- Diff: flink-libraries/flink-table/src/main/scala/org/apache/flink/api/java/table/JavaBatchTranslator.scala ---
    @@ -41,21 +44,13 @@ class JavaBatchTranslator extends PlanTranslator {
     
         // create table representation from DataSet
         val dataSetTable = new DataSetTable[A](
    -    repr.asInstanceOf[JavaDataSet[A]],
    -    fieldNames
    +      repr.asInstanceOf[JavaDataSet[A]],
    +      fieldNames
         )
    -
    -    // register table in Cascading schema
    -    val schema = Frameworks.createRootSchema(true)
         val tableName = repr.hashCode().toString
    -    schema.add(tableName, dataSetTable)
     
    -    // initialize RelBuilder
    -    val frameworkConfig = Frameworks
    -      .newConfigBuilder
    -      .defaultSchema(schema)
    -      .build
    -    val relBuilder = RelBuilder.create(frameworkConfig)
    +    TranslationContext.addDataSet(tableName, dataSetTable)
    --- End diff ---
    
    Do we really want to have this static? What happens if we use multiple TableEnvironments
in our program? They shouldn't influence each other. Is the hash code really unique? In the
old Table API we had an AtomicCounter that guaranteed uniqueness.
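A minimal sketch of the counter-based naming the comment refers to. The object and
method names (UniqueTableNames, nextName) are hypothetical and not code from the old
Table API or from this PR; the point is only that an atomic counter yields distinct
names where repr.hashCode().toString can collide.

    import java.util.concurrent.atomic.AtomicInteger

    // Hypothetical helper, not part of the PR: every call returns a fresh name,
    // even for two DataSets that happen to share a hash code.
    object UniqueTableNames {
      private val counter = new AtomicInteger(0)

      def nextName(): String = "_DataSetTable_" + counter.getAndIncrement()
    }

    // Usage sketch: val tableName = UniqueTableNames.nextName()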


> Optimize logical Table API plans in Calcite
> -------------------------------------------
>
>                 Key: FLINK-3225
>                 URL: https://issues.apache.org/jira/browse/FLINK-3225
>             Project: Flink
>          Issue Type: Sub-task
>          Components: Table API
>            Reporter: Fabian Hueske
>            Assignee: Fabian Hueske
>
> This task implements the optimization of logical Table API plans with Apache Calcite.
The input of the optimization process is a logical query plan consisting of Calcite RelNodes.
FLINK-3223 translates Table API queries into this representation.
> The result of this issue is an optimized logical plan.
> Calcite's rule-based optimizer applies query rewriting and optimization rules. For Batch
SQL, we can use (a subset of) Calcite’s default optimization rules. 
> For this issue we have to:
> - add the Calcite optimizer to the translation process
> - select an appropriate set of batch optimization rules from Calcite’s default rules.
We can reuse the rules selected by Timo’s first SQL implementation.
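
A minimal sketch of what plugging a rule-based Calcite optimizer into the translation
process could look like, assuming a HepPlanner and two of Calcite's default rules
(FilterMergeRule, ProjectMergeRule). The planner type, the rule selection, and the
method name optimizeLogicalPlan are illustrative assumptions, not the rule set
ultimately chosen for Flink.

    import org.apache.calcite.plan.hep.{HepPlanner, HepProgramBuilder}
    import org.apache.calcite.rel.RelNode
    import org.apache.calcite.rel.rules.{FilterMergeRule, ProjectMergeRule}

    // Sketch: rewrite a logical plan by applying a small set of rewriting rules.
    def optimizeLogicalPlan(logicalPlan: RelNode): RelNode = {
      val program = new HepProgramBuilder()
        .addRuleInstance(FilterMergeRule.INSTANCE)   // merge adjacent filters
        .addRuleInstance(ProjectMergeRule.INSTANCE)  // merge adjacent projections
        .build()

      val planner = new HepPlanner(program)
      planner.setRoot(logicalPlan)
      planner.findBestExp()  // returns the rewritten logical plan
    }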



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
