hive-issues mailing list archives

From "Hive QA (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HIVE-16459) Cancel outstanding RPCs when channel closes
Date Wed, 19 Apr 2017 10:14:41 GMT

    [ https://issues.apache.org/jira/browse/HIVE-16459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15974416#comment-15974416 ]

Hive QA commented on HIVE-16459:
--------------------------------



Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12863982/HIVE-16459.2.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 10567 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestSparkCliDriver.org.apache.hadoop.hive.cli.TestSparkCliDriver (batchId=99)
org.apache.hadoop.hive.llap.security.TestLlapSignerImpl.testSigning (batchId=284)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/4756/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/4756/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-4756/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12863982 - PreCommit-HIVE-Build

> Cancel outstanding RPCs when channel closes
> -------------------------------------------
>
>                 Key: HIVE-16459
>                 URL: https://issues.apache.org/jira/browse/HIVE-16459
>             Project: Hive
>          Issue Type: Bug
>          Components: Spark
>            Reporter: Rui Li
>            Assignee: Rui Li
>         Attachments: HIVE-16459.1.patch, HIVE-16459.2.patch, HIVE-16459.2.patch
>
>
> In SparkTask, we try to get job info after the query finishes. Suppose the job finishes
> because the remote side crashes, which closes the RPC channel. There's a race condition: if
> we request the job info before we notice the RPC is closed, the SparkTask waits for
> {{hive.spark.client.future.timeout}} (default 60s) before returning, even though we already
> know the job has failed.
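The fix the issue title describes can be sketched as follows. This is a minimal, hypothetical illustration (the class and method names below are not Hive's actual RPC classes): pending RPC calls are tracked in a map of futures, and a channel-closed callback completes all of them exceptionally so that waiters return immediately instead of blocking until their own timeout.

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of "cancel outstanding RPCs when channel closes":
// each in-flight call registers a future; when the channel closes, every
// pending future is failed at once so callers do not sit out the timeout.
public class RpcDispatcherSketch {
    private final Map<Long, CompletableFuture<String>> outstanding =
            new ConcurrentHashMap<>();

    // Register an in-flight RPC call and return its future.
    public CompletableFuture<String> register(long callId) {
        CompletableFuture<String> f = new CompletableFuture<>();
        outstanding.put(callId, f);
        return f;
    }

    // Invoked from the channel-closed callback: fail every pending call
    // immediately instead of letting each waiter time out on its own.
    public void onChannelClosed() {
        for (CompletableFuture<String> f : outstanding.values()) {
            f.completeExceptionally(
                    new IllegalStateException("RPC channel closed"));
        }
        outstanding.clear();
    }

    public static void main(String[] args) {
        RpcDispatcherSketch d = new RpcDispatcherSketch();
        CompletableFuture<String> pending = d.register(1L);
        d.onChannelClosed();
        // The waiter observes the failure right away, no timeout needed.
        System.out.println("cancelled=" + pending.isCompletedExceptionally());
    }
}
```

In a Netty-based client such as Hive's Spark remote driver, the natural hook for `onChannelClosed` would be the channel's close notification, but the wiring above is only an assumption about the shape of the fix, not the actual patch.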



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
