spark-issues mailing list archives

From "hanhonggen (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (SPARK-5001) BlockRDD removed unreasonablly in streaming
Date Sun, 04 Jan 2015 01:39:34 GMT

    [ https://issues.apache.org/jira/browse/SPARK-5001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14263700#comment-14263700 ]

hanhonggen commented on SPARK-5001:
-----------------------------------

I don't think it's possible to guarantee that all jobs generated in Spark Streaming will
finish in order. My patch may not be the proper fix, but the current cleanup logic in
Spark Streaming is too coarse.

> BlockRDD removed unreasonablly in streaming
> -------------------------------------------
>
>                 Key: SPARK-5001
>                 URL: https://issues.apache.org/jira/browse/SPARK-5001
>             Project: Spark
>          Issue Type: Bug
>    Affects Versions: 1.0.2, 1.1.1, 1.2.0
>            Reporter: hanhonggen
>         Attachments: fix_bug_BlockRDD_removed_not_reasonablly_in_streaming.patch
>
>
> I've counted messages using the Kafka input stream of spark-1.1.1. The test app failed when
> a later batch job completed sooner than the previous one. In the source code, BlockRDDs older
> than (time - rememberDuration) are removed in clearMetadata after a job completes, and the
> previous job then aborts because its blocks are no longer found. The relevant logs are as follows:
> 2014-12-25 14:07:12(Logging.scala:59)[sparkDriver-akka.actor.default-dispatcher-14] INFO :Starting job streaming job 1419487632000 ms.0 from job set of time 1419487632000 ms
> 2014-12-25 14:07:15(Logging.scala:59)[sparkDriver-akka.actor.default-dispatcher-14] INFO :Starting job streaming job 1419487635000 ms.0 from job set of time 1419487635000 ms
> 2014-12-25 14:07:15(Logging.scala:59)[sparkDriver-akka.actor.default-dispatcher-15] INFO :Finished job streaming job 1419487635000 ms.0 from job set of time 1419487635000 ms
> 2014-12-25 14:07:15(Logging.scala:59)[sparkDriver-akka.actor.default-dispatcher-16] INFO :Removing blocks of RDD BlockRDD[3028] at createStream at TestKafka.java:144 of time 1419487635000 ms from DStream clearMetadata
> java.lang.Exception: Could not compute split, block input-0-1419487631400 not found for 3028
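The race in the quoted logs can be sketched as follows. This is a minimal, hypothetical model, not Spark's actual code: the class name, the 3000 ms rememberDuration, and the map of generated RDDs are illustrative. Once the later batch finishes, cleanup drops every RDD at or older than (time - rememberDuration), including one that a still-running earlier job depends on.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical, simplified model of the metadata cleanup that triggers this
// bug. Names and the 3000 ms rememberDuration are illustrative only.
public class ClearMetadataSketch {
    static final long REMEMBER_DURATION_MS = 3000L;

    // batch time (ms) -> BlockRDD name generated for that batch
    static final Map<Long, String> generatedRDDs = new LinkedHashMap<>();

    // After a batch at timeMs completes, remove every RDD at or older than
    // (timeMs - rememberDuration) -- regardless of whether an earlier batch's
    // job is still running and still needs those blocks.
    static List<String> clearMetadata(long timeMs) {
        List<String> removed = new ArrayList<>();
        generatedRDDs.entrySet().removeIf(e -> {
            if (e.getKey() <= timeMs - REMEMBER_DURATION_MS) {
                removed.add(e.getValue());
                return true;
            }
            return false;
        });
        return removed;
    }

    public static void main(String[] args) {
        generatedRDDs.put(1419487632000L, "BlockRDD[3027]"); // job still running
        generatedRDDs.put(1419487635000L, "BlockRDD[3028]"); // finishes first
        // Cleanup runs when the 1419487635000 batch completes; the RDD for the
        // still-running 1419487632000 batch is removed, and that job later
        // fails with "Could not compute split, block ... not found".
        List<String> removed = clearMetadata(1419487635000L);
        System.out.println(removed);
    }
}
```

In this sketch the cleanup keys only on batch time, mirroring why the earlier job aborts when its batch completes out of order.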



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org

