spark-dev mailing list archives

From Imran Rashid <iras...@cloudera.com>
Subject Re: Spark runs into an Infinite loop even if the tasks are completed successfully
Date Wed, 12 Aug 2015 17:27:34 GMT
yikes.

Was this a one-time thing?  Or does it happen consistently?  Can you turn
on debug logging for o.a.s.scheduler? (dunno if it will help, but maybe ...)
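
(With the stock conf/log4j.properties that should just be a line like

  log4j.logger.org.apache.spark.scheduler=DEBUG

or, from the driver before the job runs, something along the lines of

  import org.apache.log4j.{Level, Logger}
  Logger.getLogger("org.apache.spark.scheduler").setLevel(Level.DEBUG)

-- untested off the top of my head, so adjust to however your logging is set up.)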

On Tue, Aug 11, 2015 at 8:59 AM, Akhil Das <akhil@sigmoidanalytics.com>
wrote:

> Hi
>
> My Spark job (running in local[*] with Spark 1.4.1) reads data from a
> thrift server through a custom RDD: the partitions are computed in the
> getPartitions() call, and compute() returns an iterator whose hasNext/next
> serve the records from those partitions (sketched below). count() and
> foreach() work fine and return the correct number of records. But whenever
> there is a shuffle map stage (reduceByKey etc.), all the tasks execute
> properly and then it enters an infinite loop saying:
>
>
>    15/08/11 13:05:54 INFO DAGScheduler: Resubmitting ShuffleMapStage 1
>    (map at FilterMain.scala:59) because some of its tasks had failed: 0, 3
>
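> For what it's worth, stripped down the RDD is basically just this (a
> simplified sketch -- the record type and the fetch helper are made-up
> names here, and the actual thrift calls are elided):
>
>   import org.apache.spark.{Partition, SparkContext, TaskContext}
>   import org.apache.spark.rdd.RDD
>
>   case class Record(key: String, value: String)   // stand-in record type
>
>   class ThriftPartition(val index: Int) extends Partition
>
>   class ThriftRDD(sc: SparkContext, numParts: Int)
>     extends RDD[Record](sc, Nil) {
>
>     // the partitions are decided here, in getPartitions()
>     override protected def getPartitions: Array[Partition] =
>       Array.tabulate[Partition](numParts)(i => new ThriftPartition(i))
>
>     // each task iterates over its split's records; the real code talks
>     // to the thrift server here
>     override def compute(split: Partition, context: TaskContext): Iterator[Record] = {
>       val part = split.asInstanceOf[ThriftPartition]
>       fetchRecords(part.index)   // made-up helper wrapping the thrift client
>     }
>
>     private def fetchRecords(idx: Int): Iterator[Record] =
>       Iterator.empty   // placeholder; real code streams records for this split
>   }
>
> and the shuffle that triggers the loop is nothing special, e.g. a plain
>
>   new ThriftRDD(sc, 4).map(r => (r.key, 1L)).reduceByKey(_ + _).collect()
>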
>
> Here's the complete stack-trace http://pastebin.com/hyK7cG8S
>
> What could be the root cause of this problem? I looked it up and came
> across this closed JIRA <https://issues.apache.org/jira/browse/SPARK-583>,
> which is very old.
>
>
>
>
> Thanks
> Best Regards
>
