hive-dev mailing list archives

From "Jimmy Xiang (JIRA)" <j...@apache.org>
Subject [jira] [Resolved] (HIVE-9258) Explain query should share the same Spark application with regular queries [Spark Branch]
Date Tue, 13 Jan 2015 02:44:35 GMT

     [ https://issues.apache.org/jira/browse/HIVE-9258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jimmy Xiang resolved HIVE-9258.
-------------------------------
    Resolution: Not a Problem

Closed it as Not a Problem. Thanks. As for sparkMemoryAndCores in SetSparkReducerParallelism,
it is used only within a single query. A query can have several reducers, so the value is better
cached. Since we create a SetSparkReducerParallelism instance per query, it should not be
cached for the entire user session, I think.
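The caching scope described above can be sketched as follows. This is an illustrative model only, not Hive's actual code: names like fetchMemoryAndCores and the reducer heuristic are hypothetical, standing in for the real SetSparkReducerParallelism logic.

```java
// Sketch: cache cluster capacity per query, not per user session.
// All names here are hypothetical, modeled on the discussion above.
import java.util.concurrent.atomic.AtomicInteger;

public class SetSparkReducerParallelismSketch {
    // Counts remote lookups so the caching behavior is observable.
    static final AtomicInteger fetches = new AtomicInteger();

    // Simulated expensive call to the Spark cluster.
    static int[] fetchMemoryAndCores() {
        fetches.incrementAndGet();
        return new int[] {8192, 16}; // {memoryMb, cores}
    }

    // One instance is created per query, so the cached value
    // lives only for that query's compilation.
    static class SetSparkReducerParallelism {
        private int[] sparkMemoryAndCores; // cached per query

        int estimateReducers(long inputBytes) {
            if (sparkMemoryAndCores == null) {
                sparkMemoryAndCores = fetchMemoryAndCores();
            }
            int cores = sparkMemoryAndCores[1];
            // Naive illustrative heuristic: one reducer per 256 MB,
            // capped by available cores.
            int byData = (int) Math.max(1, inputBytes / (256L << 20));
            return Math.min(byData, cores);
        }
    }

    public static void main(String[] args) {
        SetSparkReducerParallelism perQuery = new SetSparkReducerParallelism();
        // Several reduce sinks in one query reuse the cached value.
        perQuery.estimateReducers(1L << 30);
        perQuery.estimateReducers(4L << 30);
        System.out.println("fetches=" + fetches.get()); // 1: cached within the query

        // A new query gets a fresh instance, hence a fresh fetch.
        new SetSparkReducerParallelism().estimateReducers(1L << 30);
        System.out.println("fetches=" + fetches.get()); // 2
    }
}
```

Keeping the cache on the per-query instance gives the best of both: repeated reducer estimates within one query hit the cache, while a stale capacity value can never leak into a later query in the same session.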

> Explain query should share the same Spark application with regular queries [Spark Branch]
> -----------------------------------------------------------------------------------------
>
>                 Key: HIVE-9258
>                 URL: https://issues.apache.org/jira/browse/HIVE-9258
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Spark
>            Reporter: Xuefu Zhang
>            Assignee: Jimmy Xiang
>
> Currently for Hive on Spark, the query plan includes the number of reducers, which is determined
> partly by the Spark cluster. Thus, an explain query needs to launch a Spark application (Spark
> remote context), which should be shared with regular queries so that we don't launch an additional
> Spark remote context.
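The sharing pattern the issue asks for can be sketched as a lazily initialized, per-session holder that both the EXPLAIN path and the regular execution path go through. This is a hedged illustration: SparkSessionStub and the method names are invented for the sketch and do not correspond to Hive's actual SparkSessionManager API.

```java
// Sketch: one lazily launched Spark "remote context" per user session,
// shared by EXPLAIN and regular queries. All names are hypothetical.
public class SharedSparkSessionSketch {
    // Counts context launches so sharing is observable.
    static int contextsLaunched = 0;

    // Stand-in for the remote Spark context (expensive to launch).
    static class SparkSessionStub {
        SparkSessionStub() { contextsLaunched++; }
        int defaultParallelism() { return 16; }
    }

    // One holder per user session; every code path calls getSession().
    static class SessionState {
        private SparkSessionStub session;
        SparkSessionStub getSession() {
            if (session == null) {
                session = new SparkSessionStub(); // launch once, lazily
            }
            return session;
        }
    }

    // EXPLAIN needs cluster info (e.g. for reducer counts) but should
    // reuse the session instead of launching its own context.
    static int explainQuery(SessionState ss) {
        return ss.getSession().defaultParallelism();
    }

    static int runQuery(SessionState ss) {
        return ss.getSession().defaultParallelism();
    }

    public static void main(String[] args) {
        SessionState ss = new SessionState();
        explainQuery(ss);
        runQuery(ss);
        System.out.println("contextsLaunched=" + contextsLaunched); // 1, not 2
    }
}
```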



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
