hive-issues mailing list archives

From "Hive QA (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HIVE-9847) Hive should not allow additional attempts when RSC fails [Spark Branch]
Date Wed, 04 Mar 2015 02:42:04 GMT

    [ https://issues.apache.org/jira/browse/HIVE-9847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14346275#comment-14346275 ]

Hive QA commented on HIVE-9847:
-------------------------------



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12702308/HIVE-9847.2-spark.patch

{color:green}SUCCESS:{color} +1 7567 tests passed

Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/756/testReport
Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/756/console
Test logs: http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-SPARK-Build-756/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12702308 - PreCommit-HIVE-SPARK-Build

> Hive should not allow additional attempts when RSC fails [Spark Branch]
> ----------------------------------------------------------------------
>
>                 Key: HIVE-9847
>                 URL: https://issues.apache.org/jira/browse/HIVE-9847
>             Project: Hive
>          Issue Type: Bug
>          Components: Spark
>            Reporter: Jimmy Xiang
>            Assignee: Jimmy Xiang
>            Priority: Trivial
>             Fix For: spark-branch
>
>         Attachments: HIVE-9847.1-spark.patch, HIVE-9847.2-spark.patch
>
>
> In yarn-cluster mode, if the RSC fails on the first attempt, YARN will restart it. HoS should set "yarn.resourcemanager.am.max-attempts" to 1 to disallow such restarts when submitting Spark jobs to YARN in cluster mode.
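As a sketch of the fix the description proposes (the helper and class names below are hypothetical; only the YARN property name comes from the issue), HoS could override the client configuration to force a single application-master attempt before submitting the job:

```java
import java.util.HashMap;
import java.util.Map;

public class RscConfSketch {
    // Hypothetical helper: returns a copy of the Spark client conf with
    // the AM attempt limit pinned to 1, so YARN will not restart the
    // remote Spark context (RSC) after a failure in yarn-cluster mode.
    static Map<String, String> singleAttemptOverrides(Map<String, String> conf) {
        Map<String, String> out = new HashMap<>(conf);
        out.put("yarn.resourcemanager.am.max-attempts", "1");
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put("spark.master", "yarn-cluster");
        System.out.println(
            singleAttemptOverrides(conf).get("yarn.resourcemanager.am.max-attempts"));
    }
}
```

Copying the map rather than mutating it keeps the caller's conf untouched, which matters if the same base conf is reused across submissions.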



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
