spark-issues mailing list archives

From "Apache Spark (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (SPARK-13403) HiveConf used for SparkSQL is not based on the Hadoop configuration
Date Fri, 19 Feb 2016 17:38:18 GMT

    [ https://issues.apache.org/jira/browse/SPARK-13403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15154533#comment-15154533 ]

Apache Spark commented on SPARK-13403:
--------------------------------------

User 'rdblue' has created a pull request for this issue:
https://github.com/apache/spark/pull/11273

> HiveConf used for SparkSQL is not based on the Hadoop configuration
> -------------------------------------------------------------------
>
>                 Key: SPARK-13403
>                 URL: https://issues.apache.org/jira/browse/SPARK-13403
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.6.0
>            Reporter: Ryan Blue
>
> The HiveConf instances used by HiveContext are not instantiated by passing in the
> SparkContext's Hadoop conf; they are based only on the config files in the environment.
> Hadoop best practice is to instantiate one Configuration from the environment and then
> pass that conf when instantiating others, so that programmatic modifications aren't lost.
> When creating {{sc.hadoopConfiguration}}, Spark sets configuration variables from the
> "spark.hadoop." properties in spark-defaults.conf; because of this issue, those values
> are not passed through to the HiveConf.
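
To illustrate the pattern the quoted description refers to, here is a minimal Scala sketch. The plain Configuration below stands in for {{sc.hadoopConfiguration}}, and the "fs.s3a.access.key" property is a hypothetical example, not taken from the issue; this is not the HiveContext code itself, only the two HiveConf construction patterns being contrasted.

{code:scala}
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.hive.conf.HiveConf

// Stand-in for sc.hadoopConfiguration: Spark copies "spark.hadoop.foo=bar"
// entries from spark-defaults.conf into it as "foo=bar".
val hadoopConf = new Configuration()
hadoopConf.set("fs.s3a.access.key", "example-value") // hypothetical property

// Hadoop best practice: derive the HiveConf from the existing conf so that
// programmatic settings carry over.
val derived = new HiveConf(hadoopConf, classOf[HiveConf])
println(derived.get("fs.s3a.access.key"))  // "example-value"

// The pattern this issue reports: a HiveConf built only from the config files
// on the classpath (hive-site.xml, core-site.xml, ...), which never sees the
// values set on the Hadoop conf above.
val isolated = new HiveConf(classOf[HiveConf])
println(isolated.get("fs.s3a.access.key")) // null unless also in an XML file
{code}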



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
