hive-issues mailing list archives

From "Hive QA (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HIVE-17835) HS2 Logs print unnecessary stack trace when HoS query is cancelled
Date Thu, 08 Feb 2018 21:19:00 GMT

    [ https://issues.apache.org/jira/browse/HIVE-17835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16357591#comment-16357591 ]

Hive QA commented on HIVE-17835:
--------------------------------



Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12909664/HIVE-17835.3.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 25 failed/errored test(s), 12995 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries] (batchId=240)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] (batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=79)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl] (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] (batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1] (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=162)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan] (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=161)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_opt_shuffle_serde] (batchId=180)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=122)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query1] (batchId=250)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut (batchId=221)
org.apache.hadoop.hive.metastore.client.TestTablesList.testListTableNamesByFilterNullDatabase[Embedded] (batchId=206)
org.apache.hadoop.hive.ql.exec.TestOperators.testNoConditionalTaskSizeForLlap (batchId=282)
org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=256)
org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=188)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234)
org.apache.hive.jdbc.authorization.TestCLIAuthzSessionContext.testAuthzSessionContextContents (batchId=238)
org.apache.hive.spark.client.rpc.TestRpc.testClientTimeout (batchId=297)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/9100/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/9100/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-9100/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 25 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12909664 - PreCommit-HIVE-Build

> HS2 Logs print unnecessary stack trace when HoS query is cancelled
> ------------------------------------------------------------------
>
>                 Key: HIVE-17835
>                 URL: https://issues.apache.org/jira/browse/HIVE-17835
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Spark
>            Reporter: Sahil Takiar
>            Assignee: Sahil Takiar
>            Priority: Major
>         Attachments: HIVE-17835.1.patch, HIVE-17835.2.patch, HIVE-17835.3.patch
>
>
> Example:
> {code}
> 2017-10-05 17:47:11,881 ERROR org.apache.hadoop.hive.ql.exec.spark.status.SparkJobMonitor: [HiveServer2-Background-Pool: Thread-131]: Failed to monitor Job[ 2] with exception 'java.lang.InterruptedException(sleep interrupted)'
> java.lang.InterruptedException: sleep interrupted
> 	at java.lang.Thread.sleep(Native Method)
> 	at org.apache.hadoop.hive.ql.exec.spark.status.RemoteSparkJobMonitor.startMonitor(RemoteSparkJobMonitor.java:124)
> 	at org.apache.hadoop.hive.ql.exec.spark.status.impl.RemoteSparkJobRef.monitorJob(RemoteSparkJobRef.java:60)
> 	at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:111)
> 	at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:214)
> 	at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:99)
> 	at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2052)
> 	at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1748)
> 	at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1501)
> 	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1285)
> 	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1280)
> 	at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:236)
> 	at org.apache.hive.service.cli.operation.SQLOperation.access$300(SQLOperation.java:89)
> 	at org.apache.hive.service.cli.operation.SQLOperation$3$1.run(SQLOperation.java:301)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:422)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
> 	at org.apache.hive.service.cli.operation.SQLOperation$3.run(SQLOperation.java:314)
> 	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> 	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> 	at java.lang.Thread.run(Thread.java:748)
> 2017-10-05 17:47:11,881 WARN  org.apache.hadoop.hive.ql.Driver: [HiveServer2-Handler-Pool: Thread-105]: Shutting down task : Stage-2:MAPRED
> 2017-10-05 17:47:11,882 ERROR org.apache.hadoop.hive.ql.exec.spark.status.SparkJobMonitor: [HiveServer2-Background-Pool: Thread-131]: Failed to monitor Job[ 2] with exception 'java.lang.InterruptedException(sleep interrupted)'
> java.lang.InterruptedException: sleep interrupted
> 	at java.lang.Thread.sleep(Native Method)
> 	at org.apache.hadoop.hive.ql.exec.spark.status.RemoteSparkJobMonitor.startMonitor(RemoteSparkJobMonitor.java:124)
> 	at org.apache.hadoop.hive.ql.exec.spark.status.impl.RemoteSparkJobRef.monitorJob(RemoteSparkJobRef.java:60)
> 	at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:111)
> 	at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:214)
> 	at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:99)
> 	at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2052)
> 	at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1748)
> 	at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1501)
> 	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1285)
> 	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1280)
> 	at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:236)
> 	at org.apache.hive.service.cli.operation.SQLOperation.access$300(SQLOperation.java:89)
> 	at org.apache.hive.service.cli.operation.SQLOperation$3$1.run(SQLOperation.java:301)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:422)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
> 	at org.apache.hive.service.cli.operation.SQLOperation$3.run(SQLOperation.java:314)
> 	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> 	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> 	at java.lang.Thread.run(Thread.java:748)
> {code}
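>
> One way to cut this noise would be for the monitor loop to distinguish a user-initiated cancel from an unexpected interruption and log a single concise line in the former case. The sketch below only illustrates that idea; the class, the cancelRequested flag, and pollJobState() are hypothetical and are not the actual RemoteSparkJobMonitor code or the attached patch.
> {code}
> // Hypothetical sketch -- not the actual RemoteSparkJobMonitor implementation.
> // Idea: when the sleep is interrupted because the query was cancelled,
> // log one short INFO line instead of an ERROR with a full stack trace.
> import org.slf4j.Logger;
> import org.slf4j.LoggerFactory;
>
> public class JobMonitorSketch {
>
>   private static final Logger LOG = LoggerFactory.getLogger(JobMonitorSketch.class);
>
>   // Hypothetical flag set by the cancel path when the HoS query is killed.
>   private volatile boolean cancelRequested;
>
>   public void requestCancel() {
>     cancelRequested = true;
>   }
>
>   public void startMonitor(int jobId) {
>     boolean running = true;
>     while (running) {
>       try {
>         Thread.sleep(1000);                  // poll interval
>         running = pollJobState(jobId);       // hypothetical remote status check
>       } catch (InterruptedException ie) {
>         if (cancelRequested) {
>           // Query was cancelled: a single INFO line, no stack trace.
>           LOG.info("Monitoring of Job[{}] stopped because the query was cancelled", jobId);
>         } else {
>           // Unexpected interruption: keep the full stack trace.
>           LOG.error("Failed to monitor Job[" + jobId + "]", ie);
>         }
>         Thread.currentThread().interrupt();  // restore the interrupt status
>         running = false;
>       }
>     }
>   }
>
>   private boolean pollJobState(int jobId) {
>     // Placeholder for the real remote job status check; returns false to end the loop.
>     return false;
>   }
> }
> {code}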



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
