spark-reviews mailing list archives

From vanzin <>
Subject [GitHub] spark pull request: [WIP]SPARK-2098: All Spark processes should su...
Date Tue, 08 Jul 2014 17:46:18 GMT
Github user vanzin commented on a diff in the pull request:
    --- Diff: core/src/main/scala/org/apache/spark/SparkConf.scala ---
    @@ -36,20 +38,27 @@ import scala.collection.mutable.HashMap
      * Note that once a SparkConf object is passed to Spark, it is cloned and can no longer be modified
      * by the user. Spark does not support modifying the configuration at runtime.
    - * @param loadDefaults whether to also load values from Java system properties
    + * @param loadDefaults whether to also load values from Java system properties, file and resource
    + * @param fileName load properties from file
    --- End diff --
    Also, I don't see any code ever using the `resource` argument. Is it really needed? Unless it's somehow hooked into SparkSubmit, I don't see it being very useful.
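
    For context, the constructor under review adds file-based loading alongside the existing system-property defaults. A minimal sketch of how such loading might compose (class and parameter names here are hypothetical illustrations, not the actual patch):

    ```scala
    import java.io.FileInputStream
    import java.util.Properties
    import scala.collection.mutable.HashMap
    import scala.jdk.CollectionConverters._

    // Hypothetical sketch: load spark.* keys from Java system properties
    // (as SparkConf already does), then overlay entries from an optional
    // properties file, mirroring the patch's proposed `fileName` parameter.
    class SimpleConf(loadDefaults: Boolean, fileName: Option[String] = None) {
      private val settings = new HashMap[String, String]()

      if (loadDefaults) {
        for ((k, v) <- sys.props if k.startsWith("spark.")) {
          settings(k) = v
        }
      }

      fileName.foreach { path =>
        val props = new Properties()
        val in = new FileInputStream(path)
        try props.load(in) finally in.close()
        for (name <- props.stringPropertyNames().asScala) {
          settings(name) = props.getProperty(name)
        }
      }

      def get(key: String): Option[String] = settings.get(key)
    }
    ```

    Under this sketch, a `resource` parameter would only matter if something like SparkSubmit resolved it from the classpath before construction, which is the hook the comment above is questioning.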

