spark-reviews mailing list archives

From marmbrus <...@git.apache.org>
Subject [GitHub] spark pull request: [SPARK-2393][SQL] Cost estimation optimization...
Date Mon, 28 Jul 2014 21:28:00 GMT
Github user marmbrus commented on a diff in the pull request:

    https://github.com/apache/spark/pull/1238#discussion_r15492206
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/SQLConf.scala ---
    @@ -39,29 +43,34 @@ trait SQLConf {
     
       /**
        * Upper bound on the sizes (in bytes) of the tables qualified for the auto conversion to
    -   * a broadcast value during the physical executions of join operations.  Setting this to 0
    +   * a broadcast value during the physical executions of join operations.  Setting this to -1
        * effectively disables auto conversion.
    -   * Hive setting: hive.auto.convert.join.noconditionaltask.size.
    +   *
    +   * Hive setting: hive.auto.convert.join.noconditionaltask.size, whose default value is also 10000.
        */
       private[spark] def autoConvertJoinSize: Int =
         get("spark.sql.auto.convert.join.size", "10000").toInt
    --- End diff --
    
    What do you think about avoiding `.`s when there isn't a hierarchy? So maybe `spark.sql.autoBroadcastJoinThreshold`?
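
    For context, the accessor in the diff reads a string-valued setting and parses it to an `Int`. A minimal self-contained sketch of that pattern (not actual Spark code; `ConfSketch` and its backing map are illustrative stand-ins for the real `SQLConf`):

    ```scala
    // Sketch of a SQLConf-style settings object: string-keyed, string-valued,
    // with typed accessors that apply a default and parse on read.
    object ConfSketch {
      private val settings = scala.collection.mutable.Map[String, String]()

      def set(key: String, value: String): Unit = settings(key) = value

      def get(key: String, default: String): String = settings.getOrElse(key, default)

      // Mirrors the accessor in the diff: defaults to 10000 bytes; per the
      // updated comment, setting it to -1 disables auto conversion.
      def autoConvertJoinSize: Int =
        get("spark.sql.auto.convert.join.size", "10000").toInt
    }
    ```

    With no explicit setting, `ConfSketch.autoConvertJoinSize` returns 10000; after `ConfSketch.set("spark.sql.auto.convert.join.size", "-1")` it returns -1.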


---
