hive-issues mailing list archives

From "Hive QA (JIRA)" <>
Subject [jira] [Commented] (HIVE-16459) Cancel outstanding RPCs when channel closes
Date Wed, 19 Apr 2017 10:14:41 GMT


Hive QA commented on HIVE-16459:

Here are the results of testing the latest attachment:

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 10567 tests executed
*Failed tests:*
(batchId=99) (batchId=284)

Test results:
Console output:
Test logs:

Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed

This message is automatically generated.

ATTACHMENT ID: 12863982 - PreCommit-HIVE-Build

> Cancel outstanding RPCs when channel closes
> -------------------------------------------
>                 Key: HIVE-16459
>                 URL:
>             Project: Hive
>          Issue Type: Bug
>          Components: Spark
>            Reporter: Rui Li
>            Assignee: Rui Li
>         Attachments: HIVE-16459.1.patch, HIVE-16459.2.patch, HIVE-16459.2.patch
> In SparkTask, we try to get job info after the query finishes. Suppose the job finishes
> because the remote side crashes and thereby closes the RPC. There's a race condition: if we
> try to get the job info before we notice the RPC is closed, the SparkTask waits for
> {{hive.spark.client.future.timeout}} (default 60s) before returning, even though we already
> know the job has failed.
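The fix suggested by the issue title can be sketched as follows: the RPC client keeps a registry of outstanding call futures, and when the channel closes it fails all of them immediately, so callers see the error right away instead of blocking until the request timeout. This is a minimal, self-contained sketch using `CompletableFuture`; the class and method names here are hypothetical illustrations, not Hive's actual Spark client classes:

```java
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical RPC client illustrating "cancel outstanding RPCs when channel closes".
class SketchRpcClient {
    private final Map<Long, CompletableFuture<String>> outstanding = new ConcurrentHashMap<>();
    private final AtomicLong nextId = new AtomicLong();

    // Register a pending RPC; the future would normally complete when a reply arrives.
    CompletableFuture<String> call() {
        CompletableFuture<String> f = new CompletableFuture<>();
        outstanding.put(nextId.getAndIncrement(), f);
        return f;
    }

    // Invoked from the channel-closed callback: fail every pending call
    // immediately instead of letting callers wait out the future timeout.
    void channelClosed() {
        for (CompletableFuture<String> f : outstanding.values()) {
            f.completeExceptionally(new IOException("channel closed"));
        }
        outstanding.clear();
    }
}

public class Main {
    public static void main(String[] args) {
        SketchRpcClient client = new SketchRpcClient();
        CompletableFuture<String> pending = client.call();
        client.channelClosed();
        // The caller observes the failure immediately rather than blocking ~60s.
        System.out.println("pending failed fast: " + pending.isCompletedExceptionally());
    }
}
```

In a Netty-based client like Hive's, the natural hook for `channelClosed()` is the handler's channel-inactive callback, which fires as soon as the remote side drops the connection.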

This message was sent by Atlassian JIRA
