Date: Fri, 26 Jun 2015 10:14:04 +0000 (UTC)
From: "JoneZhang (JIRA)"
To: dev@hive.apache.org
Subject: [jira] [Created] (HIVE-11125) When I run a SQL query with Hive on Spark, the Hive CLI finishes, but the application keeps running

JoneZhang created HIVE-11125:
--------------------------------

             Summary: When I run a SQL query with Hive on Spark, the Hive CLI finishes, but the application keeps running
                 Key: HIVE-11125
                 URL: https://issues.apache.org/jira/browse/HIVE-11125
             Project: Hive
          Issue Type: Bug
          Components: spark-branch
    Affects Versions: 1.2.0
         Environment: Hive 1.2.0, Spark 1.3.1, Hadoop 2.5.1
            Reporter: JoneZhang

When I run a SQL query with Hive on Spark, the query finishes in the Hive CLI:
hive (default)> select count(id) from t1 where id>100;
Query ID = mqq_20150626174732_9e18f0c9-7b56-46ab-bf90-3b66f1a51300
Total jobs = 1
Launching Job 1 out of 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=
In order to set a constant number of reducers:
  set mapreduce.job.reduces=
Starting Spark Job = 7d34cb8c-eaad-4724-a99a-37e517db80d9

Query Hive on Spark job[0] stages:
0
1

Status: Running (Hive on Spark job[0])
Job Progress Format
CurrentTime StageId_StageAttemptId: SucceededTasksCount(+RunningTasksCount-FailedTasksCount)/TotalTasksCount [StageCost]
2015-06-26 17:47:53,746 Stage-0_0: 0(+1)/5  Stage-1_0: 0/1
2015-06-26 17:47:56,771 Stage-0_0: 1(+0)/5  Stage-1_0: 0/1
2015-06-26 17:47:57,778 Stage-0_0: 4(+1)/5  Stage-1_0: 0/1
2015-06-26 17:47:59,791 Stage-0_0: 5/5 Finished  Stage-1_0: 0(+1)/1
2015-06-26 17:48:00,797 Stage-0_0: 5/5 Finished  Stage-1_0: 1/1 Finished
Status: Finished successfully in 18.08 seconds
OK
5
Time taken: 28.512 seconds, Fetched: 1 row(s)

But the application is still in the RUNNING state on the ResourceManager:

User: mqq
Name: Hive on Spark
Application Type: SPARK
Application Tags:
State: RUNNING
FinalStatus: UNDEFINED
Started: 2015-06-26 17:47:38
Elapsed: 24mins, 33sec
Tracking URL: ApplicationMaster
Diagnostics:

And hive.log keeps reporting the application as RUNNING:

2015-06-26 18:12:26,878 INFO [stderr-redir-1]: client.SparkClientImpl (SparkClientImpl.java:run(569)) - 15/06/26 18:12:26 main INFO org.apache.spark.deploy.yarn.Client>> Application report for application_1433328839160_0071 (state: RUNNING)
2015-06-26 18:12:27,879 INFO [stderr-redir-1]: client.SparkClientImpl (SparkClientImpl.java:run(569)) - 15/06/26 18:12:27 main INFO org.apache.spark.deploy.yarn.Client>> Application report for application_1433328839160_0071 (state: RUNNING)
2015-06-26 18:12:28,880 INFO [stderr-redir-1]: client.SparkClientImpl (SparkClientImpl.java:run(569)) - 15/06/26 18:12:28 main INFO org.apache.spark.deploy.yarn.Client>> Application report for application_1433328839160_0071 (state: RUNNING)
...

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
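For anyone scripting a watchdog around this symptom, here is a minimal sketch (a hypothetical helper, not part of Hive or Spark) that extracts the application id and state from the yarn Client report lines seen in hive.log above:

```python
import re

# Matches the org.apache.spark.deploy.yarn.Client report lines from hive.log,
# e.g. "... Application report for application_1433328839160_0071 (state: RUNNING)"
REPORT_RE = re.compile(r"Application report for (application_\d+_\d+) \(state: (\w+)\)")

def parse_report(line):
    """Return (application_id, state) for a report line, or None otherwise."""
    m = REPORT_RE.search(line)
    return m.groups() if m else None

line = ("15/06/26 18:12:26 main INFO org.apache.spark.deploy.yarn.Client>> "
        "Application report for application_1433328839160_0071 (state: RUNNING)")
print(parse_report(line))
```

If the parsed state stays RUNNING long after the query has finished, the application id it yields can be checked against the ResourceManager by hand.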