hive-dev mailing list archives

From "Xuefu Zhang (JIRA)" <>
Subject [jira] [Commented] (HIVE-8548) Integrate with remote Spark context after HIVE-8528 [Spark Branch]
Date Thu, 06 Nov 2014 14:46:33 GMT


Xuefu Zhang commented on HIVE-8548:

Hi [~chengxiang li], I think nobody is going to deploy HS2 in production with local mode, and
HS2 embedded mode (embedded in Beeline) should behave like Hive CLI. Thus, I think it might
be better to keep them consistent. Based on this, I think "local" should be the default whether
it's Hive CLI or HS2, and they actually share the same code path. In addition, "local" should
refer to a local Spark context in both cases. As to the concurrency problem, we just need some
proper documentation. A remote Spark context should be used when {{spark.master != local}}.
I think this approach makes the implementation simpler with seemingly better usability. We can
revisit this at a later phase.
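To make the proposed dispatch concrete: the idea is that the choice between a local and a remote Spark context hinges only on the {{spark.master}} value. The sketch below is hypothetical (the class and method names are not from the Hive codebase); it just illustrates the rule "local context when {{spark.master}} is local, remote context otherwise", assuming that `local[N]`-style masters also count as local.

```java
// Hypothetical illustration of the dispatch rule discussed above.
// Not Hive code: class and method names are invented for this sketch.
public class SparkContextSelector {

    // Returns true when a remote Spark context should be used,
    // i.e. when spark.master is not a local master ("local", "local[4]", ...).
    public static boolean useRemoteContext(String sparkMaster) {
        return sparkMaster != null && !sparkMaster.startsWith("local");
    }

    public static void main(String[] args) {
        System.out.println(useRemoteContext("local"));        // false -> local SparkContext
        System.out.println(useRemoteContext("local[4]"));     // false -> local SparkContext
        System.out.println(useRemoteContext("yarn-cluster")); // true  -> remote Spark context
    }
}
```

With this rule, Hive CLI and an HS2 deployment with {{spark.master=local}} would share the local-context code path, while any cluster master (standalone, YARN, Mesos) would route through the remote context.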

> Integrate with remote Spark context after HIVE-8528 [Spark Branch]
> ------------------------------------------------------------------
>                 Key: HIVE-8548
>                 URL:
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Spark
>            Reporter: Xuefu Zhang
>            Assignee: Chengxiang Li
> With HIVE-8528, HiveServer2 should use the remote Spark context to submit jobs and monitor
> progress, etc. This is necessary if Hive runs on a standalone cluster, YARN, or Mesos. If Hive
> runs with spark.master=local, we should continue using SparkContext in the current way.

This message was sent by Atlassian JIRA
