spark-reviews mailing list archives

From kayousterhout <>
Subject [GitHub] spark issue #17297: [SPARK-14649][CORE] DagScheduler should not run duplicat...
Date Wed, 15 Mar 2017 18:13:33 GMT
Github user kayousterhout commented on the issue:
    @sitalkedia I won't have time to review this in detail for at least a few weeks, just
so you know (although others may have time to review / merge it).
    At a very high level, I'm concerned about the amount of complexity this adds to the
scheduler code.  We've recently had to deal with a number of subtle bugs, with jobs hanging
or Spark crashing, as a result of trying to handle map output from old tasks.  Consequently,
I'm hesitant to add more complexity -- and the associated risk of bugs that cause job failures,
plus the expense of maintaining the code -- in order to improve performance.
    At this point, I'd lean towards cancelling outstanding map tasks when a fetch failure occurs
(there's currently a TODO in the code to do this) to simplify these issues.  This would improve
performance in some ways, by freeing up slots that could be used for something else, at the
expense of wasted work if the tasks have already made significant progress.  But it would
significantly simplify the scheduler code, which I think is worthwhile given the debugging
and reviewer time that has gone into fixing subtle issues with this code path.
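    [Editor's note: the cancel-on-fetch-failure idea above could be sketched roughly as
follows.  This is a minimal toy model, not the actual DAGScheduler code; `ToyScheduler`,
`TaskInfo`, and `handleFetchFailure` are hypothetical names used only to illustrate the
tradeoff: cancelled tasks free their slots immediately, and their now-stale map output is
never registered, at the cost of discarding any progress they had made.]

    ```scala
    import scala.collection.mutable

    // Hypothetical task record: which stage it belongs to and whether it is running.
    case class TaskInfo(taskId: Long, stageId: Int, var running: Boolean = true)

    class ToyScheduler {
      // All currently running tasks, keyed by task id.
      val runningTasks = mutable.Map.empty[Long, TaskInfo]

      def submit(task: TaskInfo): Unit = runningTasks(task.taskId) = task

      // On a fetch failure attributed to `failedStageId`, cancel that stage's
      // outstanding map tasks instead of letting them finish: their output would
      // be stale anyway, and cancelling frees the slots for the stage retry.
      // Returns the ids of the cancelled tasks.
      def handleFetchFailure(failedStageId: Int): Seq[Long] = {
        val stale = runningTasks.values.filter(_.stageId == failedStageId).toSeq
        stale.foreach { t =>
          t.running = false          // stand-in for sending a kill to the executor
          runningTasks.remove(t.taskId)
        }
        stale.map(_.taskId)
      }
    }
    ```

    For example, with tasks 1 and 2 running in stage 1 and task 3 in stage 2, a fetch
failure on stage 1 cancels exactly tasks 1 and 2 and leaves task 3 untouched.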
    Curious what other folks think here.
