spark-issues mailing list archives

From "Marcelo Vanzin (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (SPARK-20658) spark.yarn.am.attemptFailuresValidityInterval doesn't seem to have an effect
Date Mon, 08 May 2017 21:55:04 GMT

    [ https://issues.apache.org/jira/browse/SPARK-20658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16001623#comment-16001623 ]

Marcelo Vanzin commented on SPARK-20658:
----------------------------------------

That does not say which package you used (i.e. which version of Hadoop is packaged with your
Spark build).
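For context on why the bundled Hadoop version matters: the validity interval relies on a YARN
API, ApplicationSubmissionContext.setAttemptFailuresValidityInterval(long), that only exists in
Hadoop/YARN 2.6 and later. Below is a minimal sketch (an illustration, not Spark's actual client
code) of setting the interval on the submission context:

    // Minimal sketch: setting the attempt-failures validity interval on a YARN
    // application submission context. This call only exists in Hadoop/YARN 2.6+,
    // which is why the Hadoop version packaged with the Spark build matters.
    import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext
    import org.apache.hadoop.yarn.util.Records

    val appContext: ApplicationSubmissionContext =
      Records.newRecord(classOf[ApplicationSubmissionContext])

    // 1h expressed in milliseconds; against a pre-2.6 YARN API this does not compile.
    appContext.setAttemptFailuresValidityInterval(60L * 60 * 1000)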

> spark.yarn.am.attemptFailuresValidityInterval doesn't seem to have an effect
> ----------------------------------------------------------------------------
>
>                 Key: SPARK-20658
>                 URL: https://issues.apache.org/jira/browse/SPARK-20658
>             Project: Spark
>          Issue Type: Bug
>          Components: YARN
>    Affects Versions: 2.1.0
>            Reporter: Paul Jones
>            Priority: Minor
>
> I'm running a job in YARN cluster mode with `spark.yarn.am.attemptFailuresValidityInterval=1h`
> specified both in spark-defaults.conf and in my spark-submit command. (The flag shows up in
> the environment tab of the Spark history server, so it seems to be specified correctly.)
> However, I just had a job die with four AM failures (three of the four failures
> were over an hour apart), so I'm confused about what could be going on. I haven't figured
> out the cause of the individual failures, so is it possible that certain types of failures
> are always counted? E.g., do jobs that are killed due to memory issues always count?
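
For reference, a minimal sketch of supplying the same setting programmatically through SparkConf,
equivalent to the spark-defaults.conf entry or spark-submit flag described above (illustration
only; the app name is hypothetical):

    // Minimal sketch: the same setting supplied through SparkConf, equivalent to
    //   --conf spark.yarn.am.attemptFailuresValidityInterval=1h
    // on spark-submit or an entry in spark-defaults.conf.
    import org.apache.spark.SparkConf

    val conf = new SparkConf()
      .setAppName("example")  // hypothetical app name
      .set("spark.yarn.am.attemptFailuresValidityInterval", "1h")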



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

