hive-dev mailing list archives

From "Chengxiang Li (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HIVE-8993) Make sure Spark + HS2 work [Spark Branch]
Date Fri, 12 Dec 2014 09:57:13 GMT

    [ https://issues.apache.org/jira/browse/HIVE-8993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14243929#comment-14243929 ]

Chengxiang Li commented on HIVE-8993:
-------------------------------------

During the test of multiple Beeline clients connecting to HS2, the second Beeline session hangs while
executing a query because its Spark application cannot launch executors.
In standalone mode, each Spark application (i.e., each SparkContext) requests its own executors from the
Spark Master, which schedules executors by allocating worker resources (memory and CPU cores) to them.
The total number of cores an application may use is capped by "spark.cores.max", whose default is
"spark.deploy.defaultCores", which in turn defaults to Integer.MAX_VALUE. If Hive does not set
"spark.cores.max", the Master assigns all worker cores to the first application that requests executors,
so later Spark applications never get a chance to launch executors until the first one quits. To support
multiple concurrent users on HS2, "spark.cores.max" or "spark.deploy.defaultCores" has to be set
appropriately. A minimal sketch of the two options follows below.
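
For illustration only, a sketch of the two workarounds under Spark standalone mode; the value 4 is an
example and should be sized to the cluster and the expected number of concurrent HS2 sessions:

    -- Per-session cap, set from Beeline before running queries
    -- (Hive on Spark passes spark.* properties into the Spark configuration):
    set spark.cores.max=4;

    # Or a cluster-wide default, configured on the Spark Master
    # (e.g. in conf/spark-defaults.conf), applied to applications
    # that do not set spark.cores.max themselves:
    spark.deploy.defaultCores    4

Either setting leaves worker cores available for the executors of later applications instead of granting
them all to the first SparkContext.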

> Make sure Spark + HS2 work [Spark Branch]
> -----------------------------------------
>
>                 Key: HIVE-8993
>                 URL: https://issues.apache.org/jira/browse/HIVE-8993
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Spark
>            Reporter: Xuefu Zhang
>            Assignee: Chengxiang Li
>              Labels: TODOC-SPARK
>             Fix For: spark-branch
>
>         Attachments: HIVE-8993.1-spark.patch, HIVE-8993.2-spark.patch, HIVE-8993.3-spark.patch
>
>
> We haven't formally tested this combination yet.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
