Date: Tue, 14 Feb 2017 02:20:42 +0000 (UTC)
From: "KaiXu (JIRA)"
To: issues@hive.apache.org
Reply-To: dev@hive.apache.org
Subject: [jira] [Commented] (HIVE-15887) could not get APP ID and cause failed to connect to spark driver on yarn-client mode

[ https://issues.apache.org/jira/browse/HIVE-15887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15864894#comment-15864894 ]

KaiXu commented on HIVE-15887:
------------------------------

This issue occurred again yesterday. Notably, it occurred while dynamic allocation was at its default setting (disabled).
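For context, dynamic allocation is controlled by spark.dynamicAllocation.enabled, which Spark leaves off by default, so "default (disabled)" above means the flag was effectively false. A minimal Java sketch of one way to confirm the effective value from the driver-side SparkConf (the class name is illustrative; only SparkConf.getBoolean is assumed, and it falls back to the supplied default when the key is unset):

{code:java}
import org.apache.spark.SparkConf;

// Illustrative check of the effective dynamic-allocation flag.
// SparkConf picks up spark.* properties from spark-defaults.conf,
// --conf arguments, and system properties.
public class CheckDynamicAllocation {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf();
        // false is Spark's own default for this key, so an unset value
        // reports "disabled", matching the situation described above.
        boolean enabled = conf.getBoolean("spark.dynamicAllocation.enabled", false);
        System.out.println("spark.dynamicAllocation.enabled = " + enabled);
    }
}
{code}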
> could not get APP ID and cause failed to connect to spark driver on yarn-client mode
> ------------------------------------------------------------------------------------
>
>                 Key: HIVE-15887
>                 URL: https://issues.apache.org/jira/browse/HIVE-15887
>             Project: Hive
>          Issue Type: Bug
>          Components: Hive, Spark
>    Affects Versions: 2.2.0
>         Environment: Hive 2.2
>                      Spark 2.0.2
>                      Hadoop 2.7.1
>            Reporter: KaiXu
>
> When I run Hive queries on Spark, I get the error below in the console. Checking the container's log shows that it failed to connect to the Spark driver. I have set hive.spark.job.monitor.timeout=3600s, which is why the log says 'Job hasn't been submitted after 3601s'. It is very unlikely that no resources were available for that entire period, and I saw no network-related issues either, so the cause is not clear from the message "Possible reasons include network issues, errors in remote driver or the cluster has no available resources, etc.".
> Hive's log shows it failed to get the APP ID, which may be why the driver never started up.
> console log:
> Starting Spark Job = e9ce42c8-ff20-4ac8-803f-7668678c2a00
> Job hasn't been submitted after 3601s. Aborting it.
> Possible reasons include network issues, errors in remote driver or the cluster has no available resources, etc.
> Please check YARN or Spark driver's logs for further information.
> Status: SENT
> FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.spark.SparkTask
> container's log:
> 17/02/13 05:05:54 INFO yarn.ApplicationMaster: Preparing Local resources
> 17/02/13 05:05:54 INFO yarn.ApplicationMaster: Prepared Local resources Map(__spark_libs__ -> resource { scheme: "hdfs" host: "hsx-node1" port: 8020 file: "/user/root/.sparkStaging/application_1486905599813_0046/__spark_libs__6842484649003444330.zip" } size: 153484072 timestamp: 1486926551130 type: ARCHIVE visibility: PRIVATE, __spark_conf__ -> resource { scheme: "hdfs" host: "hsx-node1" port: 8020 file: "/user/root/.sparkStaging/application_1486905599813_0046/__spark_conf__.zip" } size: 116245 timestamp: 1486926551318 type: ARCHIVE visibility: PRIVATE)
> 17/02/13 05:05:54 INFO yarn.ApplicationMaster: ApplicationAttemptId: appattempt_1486905599813_0046_000002
> 17/02/13 05:05:54 INFO spark.SecurityManager: Changing view acls to: root
> 17/02/13 05:05:54 INFO spark.SecurityManager: Changing modify acls to: root
> 17/02/13 05:05:54 INFO spark.SecurityManager: Changing view acls groups to:
> 17/02/13 05:05:54 INFO spark.SecurityManager: Changing modify acls groups to:
> 17/02/13 05:05:54 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
> 17/02/13 05:05:54 INFO yarn.ApplicationMaster: Waiting for Spark driver to be reachable.
> 17/02/13 05:05:54 ERROR yarn.ApplicationMaster: Failed to connect to driver at 192.168.1.1:43656, retrying ...
> 17/02/13 05:05:54 ERROR yarn.ApplicationMaster: Failed to connect to driver at 192.168.1.1:43656, retrying ...
> 17/02/13 05:05:54 ERROR yarn.ApplicationMaster: Failed to connect to driver at 192.168.1.1:43656, retrying ...
> 17/02/13 05:05:55 ERROR yarn.ApplicationMaster: Failed to connect to driver at 192.168.1.1:43656, retrying ...
> 17/02/13 05:05:55 ERROR yarn.ApplicationMaster: Failed to connect to driver at 192.168.1.1:43656, retrying ...
> 17/02/13 05:05:55 ERROR yarn.ApplicationMaster: Failed to connect to driver at 192.168.1.1:43656, retrying ...
> [the same retry message repeats through 05:05:57]
> 17/02/13 05:07:34 ERROR yarn.ApplicationMaster: Failed to connect to driver at 192.168.1.1:43656, retrying ...
> 17/02/13 05:07:34 ERROR yarn.ApplicationMaster: Failed to connect to driver at 192.168.1.1:43656, retrying ...
> 17/02/13 05:07:34 ERROR yarn.ApplicationMaster: Failed to connect to driver at 192.168.1.1:43656, retrying ...
> 17/02/13 05:07:34 ERROR yarn.ApplicationMaster: Failed to connect to driver at 192.168.1.1:43656, retrying ...
> 17/02/13 05:07:34 ERROR yarn.ApplicationMaster: Uncaught exception:
> org.apache.spark.SparkException: Failed to connect to driver!
>         at org.apache.spark.deploy.yarn.ApplicationMaster.waitForSparkDriver(ApplicationMaster.scala:569)
>         at org.apache.spark.deploy.yarn.ApplicationMaster.runExecutorLauncher(ApplicationMaster.scala:405)
>         at org.apache.spark.deploy.yarn.ApplicationMaster.run(ApplicationMaster.scala:247)
>         at org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$main$1.apply$mcV$sp(ApplicationMaster.scala:749)
>         at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:71)
>         at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:70)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
>         at org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:70)
>         at org.apache.spark.deploy.yarn.ApplicationMaster$.main(ApplicationMaster.scala:747)
>         at org.apache.spark.deploy.yarn.ExecutorLauncher$.main(ApplicationMaster.scala:774)
>         at org.apache.spark.deploy.yarn.ExecutorLauncher.main(ApplicationMaster.scala)
> 17/02/13 05:07:34 INFO yarn.ApplicationMaster: Final app status: FAILED, exitCode: 10, (reason: Uncaught exception: org.apache.spark.SparkException: Failed to connect to driver!)
> 17/02/13 05:07:34 INFO yarn.ApplicationMaster: Unregistering ApplicationMaster with FAILED (diag message: Uncaught exception: org.apache.spark.SparkException: Failed to connect to driver!)
> 17/02/13 05:07:34 INFO yarn.ApplicationMaster: Deleting staging directory hdfs://hsx-node1:8020/user/root/.sparkStaging/application_1486905599813_0046
> 17/02/13 05:07:34 INFO util.ShutdownHookManager: Shutdown hook called
> hive's log:
> 2017-02-13T03:10:01,639 INFO [stderr-redir-1] client.SparkClientImpl: 17/02/13 03:10:01 INFO yarn.Client: Application report for application_1486905599813_0046 (state: ACCEPTED)
> 2017-02-13T03:10:06,640 INFO [stderr-redir-1] client.SparkClientImpl: 17/02/13 03:10:06 INFO yarn.Client: Application report for application_1486905599813_0046 (state: ACCEPTED)
> 2017-02-13T03:10:08,176 WARN [c807cf48-301a-47b4-96df-495b2827d6ba main] impl.RemoteSparkJobStatus: Failed to get APP ID.
> java.util.concurrent.TimeoutException
>         at io.netty.util.concurrent.AbstractFuture.get(AbstractFuture.java:49) ~[netty-all-4.0.29.Final.jar:4.0.29.Final]
>         at org.apache.hadoop.hive.ql.exec.spark.status.impl.RemoteSparkJobStatus.getAppID(RemoteSparkJobStatus.java:65) ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
>         at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:114) ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
>         at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199) ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
>         at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100) ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
>         at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2168) ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
>         at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1824) ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
>         at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1511) ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
>         at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1222) ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
>         at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1212) ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
>         at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233) ~[hive-cli-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
>         at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:184) ~[hive-cli-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
>         at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:400) ~[hive-cli-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
>         at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:336) ~[hive-cli-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
>         at org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:430) ~[hive-cli-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
>         at org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:446) ~[hive-cli-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
>         at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:749) ~[hive-cli-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
>         at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:715) ~[hive-cli-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
>         at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:642) ~[hive-cli-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_60]
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_60]
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_60]
>         at java.lang.reflect.Method.invoke(Method.java:497) ~[?:1.8.0_60]
>         at org.apache.hadoop.util.RunJar.run(RunJar.java:221) ~[hadoop-common-2.7.1.jar:?]
>         at org.apache.hadoop.util.RunJar.main(RunJar.java:136) ~[hadoop-common-2.7.1.jar:?]
> 2017-02-13T03:10:11,641 INFO [stderr-redir-1] client.SparkClientImpl: 17/02/13 03:10:11 INFO yarn.Client: Application report for application_1486905599813_0046 (state: ACCEPTED)
> 2017-02-13T03:10:16,643 INFO [stderr-redir-1] client.SparkClientImpl: 17/02/13 03:10:16 INFO yarn.Client: Application report for application_1486905599813_0046 (state: ACCEPTED)
> 2017-02-13T04:11:07,354 INFO [stderr-redir-1] client.SparkClientImpl: 17/02/13 04:11:07 INFO yarn.Client: Application report for application_1486905599813_0046 (state: ACCEPTED)
> 2017-02-13T04:11:09,706 WARN [c807cf48-301a-47b4-96df-495b2827d6ba main] impl.RemoteSparkJobStatus: Failed to get APP ID.
> java.util.concurrent.TimeoutException
> [identical stack trace to the previous TimeoutException, except that the Hive frame is org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:132) rather than SparkTask.java:114]
> 2017-02-13T04:11:09,719 ERROR [c807cf48-301a-47b4-96df-495b2827d6ba main] ql.Driver: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.spark.SparkTask
> 2017-02-13T04:11:09,720 INFO [c807cf48-301a-47b4-96df-495b2827d6ba main] ql.Driver: Completed executing command(queryId=root_20170213030905_947e7eca-a134-4652-883c-10bded3c6218); Time taken: 3723.688 seconds
> 2017-02-13T04:11:09,800 INFO [c807cf48-301a-47b4-96df-495b2827d6ba main] conf.HiveConf: Using the default value passed in for log id: c807cf48-301a-47b4-96df-495b2827d6ba
> 2017-02-13T04:11:09,800 INFO [c807cf48-301a-47b4-96df-495b2827d6ba main] session.SessionState: Resetting thread name to main
> 2017-02-13T04:11:09,800 INFO [main] conf.HiveConf: Using the default value passed in for log id: c807cf48-301a-47b4-96df-495b2827d6ba
> 2017-02-13T04:11:12,355 INFO [stderr-redir-1] client.SparkClientImpl: 17/02/13 04:11:12 INFO yarn.Client: Application report for application_1486905599813_0046 (state: ACCEPTED)
> 2017-02-13T04:11:17,356 INFO [stderr-redir-1] client.SparkClientImpl: 17/02/13 04:11:17 INFO yarn.Client: Application report for application_1486905599813_0046 (state: ACCEPTED)
> 2017-02-13T04:11:19,811 WARN [c807cf48-301a-47b4-96df-495b2827d6ba main] client.SparkClientImpl: Timed out shutting down remote driver, interrupting...
> 2017-02-13T04:11:19,811 WARN [Driver] client.SparkClientImpl: Waiting thread interrupted, killing child process.
> 2017-02-13T04:11:19,839 INFO [main] session.SessionState: Deleted directory: /tmp/hive/root/c807cf48-301a-47b4-96df-495b2827d6ba on fs with scheme hdfs
> 2017-02-13T04:11:19,839 INFO [main] session.SessionState: Deleted directory: /tmp/root/c807cf48-301a-47b4-96df-495b2827d6ba on fs with scheme file
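Both TimeoutException traces in hive's log originate in RemoteSparkJobStatus.getAppID, which blocks on a Netty Future for the application ID that the remote driver is supposed to report back; when the driver never comes up (as the ApplicationMaster log above shows), that get() can only time out. A minimal sketch of this bounded-wait failure mode, using a plain java.util.concurrent Future as a stand-in for Hive's RPC future (the 5-second timeout is only for the demo; the real wait is tied to hive.spark.job.monitor.timeout):

{code:java}
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class AppIdWait {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        // Stand-in for the driver's reply: it never completes, mirroring a
        // driver process that never connected back to the client.
        Future<String> appId = pool.submit(() -> {
            Thread.sleep(Long.MAX_VALUE);
            return "application_1486905599813_0046"; // never reached
        });
        try {
            // Bounded wait, analogous to the future.get(...) inside
            // RemoteSparkJobStatus.getAppID that produced the WARN above.
            System.out.println("APP ID: " + appId.get(5, TimeUnit.SECONDS));
        } catch (TimeoutException e) {
            System.err.println("Failed to get APP ID."); // same message as hive's log
        } catch (InterruptedException | ExecutionException e) {
            e.printStackTrace();
        } finally {
            pool.shutdownNow();
        }
    }
}
{code}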