spark-issues mailing list archives

From "Raymond Liu (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (SPARK-768) Fail a task when the remote block it is fetching is not serializable
Date Fri, 20 Jun 2014 03:02:25 GMT

    [ https://issues.apache.org/jira/browse/SPARK-768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14038307#comment-14038307 ]

Raymond Liu commented on SPARK-768:
-----------------------------------

And for case 2, the problem is that the current code does not distinguish between a NotSerializableException thrown while fetching a remote block during computation and one thrown while serializing the task result. It treats both as "task result is not serializable" and aborts the whole taskset, so I think the job will fail in the end. Is this what you mean by hanging?
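To illustrate the distinction being discussed, here is a minimal, self-contained sketch (not Spark code): `FetchFailed` and `ResultSerializationFailed` are hypothetical wrapper types standing in for the two failure sites. Tagging the exception with where it arose is what lets a scheduler retry a fetch failure instead of aborting the whole taskset.

```java
import java.io.NotSerializableException;

public class FailureSiteDemo {
    // Hypothetical wrapper: failure while fetching a remote block.
    static class FetchFailed extends RuntimeException {
        FetchFailed(Throwable cause) { super(cause); }
    }

    // Hypothetical wrapper: failure while serializing the task result.
    static class ResultSerializationFailed extends RuntimeException {
        ResultSerializationFailed(Throwable cause) { super(cause); }
    }

    // Simulated remote-block fetch: tags a NotSerializableException
    // with its origin instead of letting it surface as a generic
    // "task result not serializable" failure.
    static Object fetchRemoteBlock(boolean blockIsSerializable) {
        try {
            if (!blockIsSerializable) {
                throw new NotSerializableException("remote block");
            }
            return new Object();
        } catch (NotSerializableException e) {
            throw new FetchFailed(e);
        }
    }

    // Returns the scheduling decision the caller would make.
    static String run(boolean blockIsSerializable) {
        try {
            fetchRemoteBlock(blockIsSerializable);
            return "ok";
        } catch (FetchFailed e) {
            return "retry task";       // retryable: reschedule, e.g. locally
        } catch (ResultSerializationFailed e) {
            return "abort taskset";    // genuinely fatal for the job
        }
    }

    public static void main(String[] args) {
        System.out.println(run(false)); // prints "retry task"
    }
}
```

With the two sites collapsed into one exception type, as the comment describes, the `FetchFailed` branch could never be taken and every such failure would look fatal.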

> Fail a task when the remote block it is fetching is not serializable
> --------------------------------------------------------------------
>
>                 Key: SPARK-768
>                 URL: https://issues.apache.org/jira/browse/SPARK-768
>             Project: Spark
>          Issue Type: Bug
>            Reporter: Reynold Xin
>            Assignee: Reynold Xin
>
> When a task is fetching a remote block (e.g. locality wait exceeded), and if the block
> is not serializable, the task would hang.
> The block manager should fail the task instead of hanging the task ... once the task
> fails, eventually it will get scheduled to the local node to be executed successfully.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
