Return-Path: 
X-Original-To: apmail-spark-issues-archive@minotaur.apache.org
Delivered-To: apmail-spark-issues-archive@minotaur.apache.org
Received: from mail.apache.org (hermes.apache.org [140.211.11.3]) by minotaur.apache.org (Postfix) with SMTP id 46EAE186ED for ; Fri, 19 Feb 2016 17:38:18 +0000 (UTC)
Received: (qmail 61953 invoked by uid 500); 19 Feb 2016 17:38:18 -0000
Delivered-To: apmail-spark-issues-archive@spark.apache.org
Received: (qmail 61916 invoked by uid 500); 19 Feb 2016 17:38:18 -0000
Mailing-List: contact issues-help@spark.apache.org; run by ezmlm
Precedence: bulk
List-Help: 
List-Unsubscribe: 
List-Post: 
List-Id: 
Delivered-To: mailing list issues@spark.apache.org
Received: (qmail 61899 invoked by uid 99); 19 Feb 2016 17:38:18 -0000
Received: from arcas.apache.org (HELO arcas) (140.211.11.28) by apache.org (qpsmtpd/0.29) with ESMTP; Fri, 19 Feb 2016 17:38:18 +0000
Received: from arcas.apache.org (localhost [127.0.0.1]) by arcas (Postfix) with ESMTP id 14CCE2C14F2 for ; Fri, 19 Feb 2016 17:38:18 +0000 (UTC)
Date: Fri, 19 Feb 2016 17:38:18 +0000 (UTC)
From: "Apache Spark (JIRA)" 
To: issues@spark.apache.org
Message-ID: 
In-Reply-To: 
References: 
Subject: [jira] [Commented] (SPARK-13403) HiveConf used for SparkSQL is not based on the Hadoop configuration
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
X-JIRA-FingerPrint: 30527f35849b9dde25b450d4833f0394

    [ https://issues.apache.org/jira/browse/SPARK-13403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15154533#comment-15154533 ]

Apache Spark commented on SPARK-13403:
--------------------------------------

User 'rdblue' has created a pull request for this issue:

https://github.com/apache/spark/pull/11273

> HiveConf used for SparkSQL is not based on the Hadoop configuration
> -------------------------------------------------------------------
>
>                 Key: SPARK-13403
>                 URL: https://issues.apache.org/jira/browse/SPARK-13403
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.6.0
>            Reporter: Ryan Blue
>
> The HiveConf instances used by HiveContext are not instantiated by passing in the SparkContext's Hadoop conf; instead, they are based only on the config files in the environment. Hadoop best practice is to instantiate just one Configuration from the environment and then pass that conf in when instantiating others, so that programmatic modifications aren't lost.
>
> Spark sets configuration variables that start with "spark.hadoop." from spark-defaults.conf when creating {{sc.hadoopConfiguration}}, but because of this those settings are not passed through to the HiveConf.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org
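[Editor's illustration of the configuration-propagation pattern described in the issue above. This is a hedged sketch, not Spark or Hive code: the names `SITE_FILE_DEFAULTS`, `fresh_conf`, `derived_conf`, and `hadoop_conf` are invented for illustration. It mimics the difference between constructing a config only from on-disk site files (how the buggy HiveConf is built) and copy-constructing from an existing base config (the Hadoop best practice, analogous to `new Configuration(base)`), which preserves runtime overrides such as `spark.hadoop.*` entries applied to `sc.hadoopConfiguration`.]

```python
# Simulated *-site.xml defaults read from the classpath/environment.
SITE_FILE_DEFAULTS = {"hive.metastore.uris": "thrift://default:9083"}

def fresh_conf():
    """Mimics building a config from scratch: sees only file defaults."""
    return dict(SITE_FILE_DEFAULTS)

def derived_conf(base):
    """Mimics copy-constructing from a base config: keeps its overrides."""
    return dict(base)

# A runtime override, standing in for a spark.hadoop.* setting that Spark
# copied onto sc.hadoopConfiguration from spark-defaults.conf.
hadoop_conf = fresh_conf()
hadoop_conf["hive.metastore.uris"] = "thrift://override:9083"

# A config built from scratch silently drops the override...
lost = fresh_conf()["hive.metastore.uris"]        # "thrift://default:9083"

# ...while one derived from the existing base config keeps it.
kept = derived_conf(hadoop_conf)["hive.metastore.uris"]  # "thrift://override:9083"
```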