Date: Mon, 21 Jul 2014 07:13:38 +0000 (UTC)
From: "Chengxiang Li (JIRA)"
To: hive-dev@hadoop.apache.org
Subject: [jira] [Updated] (HIVE-7436) Load Spark configuration into Hive driver

    [ https://issues.apache.org/jira/browse/HIVE-7436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chengxiang Li updated HIVE-7436:
--------------------------------
    Attachment: HIVE-7436-Spark.2.patch

Updated the patch with default spark.master and spark.app.name.

> Load Spark configuration into Hive driver
> -----------------------------------------
>
>                 Key: HIVE-7436
>                 URL: https://issues.apache.org/jira/browse/HIVE-7436
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Spark
>            Reporter: Chengxiang Li
>            Assignee: Chengxiang Li
>         Attachments: HIVE-7436-Spark.1.patch, HIVE-7436-Spark.2.patch
>
>
> Load the Spark configuration into the Hive driver. There are three ways to set up Spark configuration:
> # Properties in the Spark configuration file (spark-defaults.conf).
> # Java system properties.
> # System environment variables.
> Spark supports configuration through environment variables only for compatibility with earlier scripts, so Hive on Spark will not support it. Hive on Spark loads defaults from Java properties, then loads properties from the configuration file, overriding any existing values.
> Configuration steps:
> # Create spark-defaults.conf and place it in the /etc/spark/conf configuration directory. Please refer to [http://spark.apache.org/docs/latest/configuration.html] for the settings available in spark-defaults.conf.
> # Create the $SPARK_CONF_DIR environment variable and set it to the location of spark-defaults.conf:
> export SPARK_CONF_DIR=/etc/spark/conf
> # Add $SPARK_CONF_DIR to the $HADOOP_CLASSPATH environment variable:
> export HADOOP_CLASSPATH=$SPARK_CONF_DIR:$HADOOP_CLASSPATH
> An illustrative spark-defaults.conf and a rough sketch of the load order above are appended after this message.
> NO PRECOMMIT TESTS. This is for spark-branch only.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
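For reference, a minimal spark-defaults.conf matching the steps above might look like the following. The property values (local master, application name, executor memory) are illustrative placeholders, not taken from the attached patch; entries are whitespace-separated key/value pairs as described in the Spark configuration documentation linked above.

    # /etc/spark/conf/spark-defaults.conf -- illustrative values only
    spark.master              local
    spark.app.name            Hive on Spark
    spark.executor.memory     1g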
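The load order described in the issue (defaults from Java system properties first, then spark-defaults.conf overriding them) could be sketched roughly as below. This is only an illustration of the described behaviour under the assumption that the file is reachable on the classpath via $SPARK_CONF_DIR on $HADOOP_CLASSPATH; the class name, method name, and fallback values are hypothetical and not from the attached patch.

    import java.io.IOException;
    import java.io.InputStream;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.Properties;

    // Hypothetical sketch; not the code in HIVE-7436-Spark.2.patch.
    public class SparkConfLoaderSketch {

        private static final String SPARK_DEFAULTS = "spark-defaults.conf";

        public static Map<String, String> loadSparkConf() throws IOException {
            Map<String, String> conf = new HashMap<>();

            // 1. Defaults from Java system properties (e.g. -Dspark.master=local).
            for (String name : System.getProperties().stringPropertyNames()) {
                if (name.startsWith("spark.")) {
                    conf.put(name, System.getProperty(name));
                }
            }

            // 2. spark-defaults.conf from the classpath ($SPARK_CONF_DIR added to
            //    $HADOOP_CLASSPATH) overrides any existing entries.
            try (InputStream in = Thread.currentThread().getContextClassLoader()
                    .getResourceAsStream(SPARK_DEFAULTS)) {
                if (in != null) {
                    Properties fileProps = new Properties();
                    fileProps.load(in);
                    for (String name : fileProps.stringPropertyNames()) {
                        conf.put(name, fileProps.getProperty(name).trim());
                    }
                }
            }

            // 3. Fall back to defaults for spark.master and spark.app.name if still
            //    unset, mirroring the patch comment; the values here are assumptions.
            conf.putIfAbsent("spark.master", "local");
            conf.putIfAbsent("spark.app.name", "Hive on Spark");
            return conf;
        }
    }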