hive-dev mailing list archives

From "Xuefu Zhang (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HIVE-7436) Load Spark configuration into Hive driver
Date Mon, 21 Jul 2014 04:55:38 GMT

    [ https://issues.apache.org/jira/browse/HIVE-7436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14068203#comment-14068203 ]

Xuefu Zhang commented on HIVE-7436:
-----------------------------------

[~chengxiang li] Thanks for working on this. With your proposal, I'm wondering whether a user
can run queries on Spark when spark-defaults.conf is missing from the user's environment. Would
the default configuration allow the user to run Spark, such as in local mode?
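
For concreteness, a minimal spark-defaults.conf for local mode might contain nothing more than
this one line (the value is an illustrative assumption on my part, not something from the patch):

    spark.master    local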

Second question: would the user be able to set or change the Spark configuration via Hive's set
command? I guess not, but I'd like to hear your thoughts.
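
To make that second question concrete, the usage I have in mind would be something like the
following in the Hive CLI (purely hypothetical; whether such set commands would be forwarded to
the Spark configuration at all is exactly the question):

    hive> set spark.master=local;
    hive> set spark.executor.memory=1g;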

> Load Spark configuration into Hive driver
> -----------------------------------------
>
>                 Key: HIVE-7436
>                 URL: https://issues.apache.org/jira/browse/HIVE-7436
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Spark
>            Reporter: Chengxiang Li
>            Assignee: Chengxiang Li
>         Attachments: HIVE-7436-Spark.1.patch
>
>
> Load Spark configuration into the Hive driver. There are three ways to set up Spark configuration:
> #  Configure properties in the Spark configuration file (spark-defaults.conf).
> #  Java system properties.
> #  System environment variables.
> Spark supports configuration through system environment variables only for compatibility with
> earlier scripts; we won't support that in Hive on Spark. Hive on Spark loads defaults from Java
> properties, then loads properties from the configuration file, overriding any existing properties.
> Configuration steps:
> # Create spark-defaults.conf and place it in the /etc/spark/conf configuration directory.
>     Please refer to [http://spark.apache.org/docs/latest/configuration.html] for the properties
>     available in spark-defaults.conf.
> # Set the $SPARK_CONF_DIR environment variable to the location of spark-defaults.conf:
>     export SPARK_CONF_DIR=/etc/spark/conf
> # Add $SPARK_CONF_DIR to the $HADOOP_CLASSPATH environment variable:
>     export HADOOP_CLASSPATH=$SPARK_CONF_DIR:$HADOOP_CLASSPATH
> NO PRECOMMIT TESTS. This is for spark-branch only.
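
Putting the three steps from the quoted description together, the end-to-end setup could look
like this (the conf directory and property values are illustrative assumptions; per the
description, values read from spark-defaults.conf override any defaults set as Java properties):

    $ mkdir -p /etc/spark/conf                          # step 1: create the conf directory
    $ printf '%s\n' 'spark.master local' \
                    'spark.executor.memory 512m' > /etc/spark/conf/spark-defaults.conf
    $ export SPARK_CONF_DIR=/etc/spark/conf             # step 2: point at spark-defaults.conf
    $ export HADOOP_CLASSPATH=$SPARK_CONF_DIR:$HADOOP_CLASSPATH   # step 3: expose it to Hive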



--
This message was sent by Atlassian JIRA
(v6.2#6252)
