spark-reviews mailing list archives

From tdas <>
Subject [GitHub] spark pull request: [SPARK-10137][Streaming]Avoid to restart recei...
Date Mon, 24 Aug 2015 00:54:19 GMT
Github user tdas commented on a diff in the pull request:
    --- Diff: streaming/src/main/scala/org/apache/spark/streaming/scheduler/ReceiverTracker.scala
    @@ -431,7 +450,8 @@ class ReceiverTracker(ssc: StreamingContext, skipReceiverLaunch: Boolean = false
    -        updateReceiverScheduledExecutors(receiver.streamId, scheduledExecutors)
    --- End diff --
    @zsxwing explained this to me offline. In the case of rescheduling, the scheduled executors are deliberately not stored, so that the following scenario does not occur:
    - The initial globally-optimal schedule is stored, but one receiver gets launched incorrectly.
    - That receiver is rejected and therefore has to be rescheduled. If the rescheduled location (which is only locally optimal for that receiver) were saved, it would overwrite the original globally-optimal location, and the receiver would end up launched somewhere that does not preserve the global balance.
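To make the idea concrete, here is a minimal sketch of the invariant being discussed. All names and types are simplified stand-ins, not the actual ReceiverTracker API: the tracker records only the initial, globally balanced schedule, while a locally-optimal rescheduling is handed to the launcher but never written back.

```scala
import scala.collection.mutable

// Hypothetical sketch: streamId -> executors, recorded only for the
// initial globally-optimal schedule.
object ReschedulingSketch {
  val scheduledExecutors = mutable.Map[Int, Seq[String]]()

  def scheduleAll(): Unit = {
    // The initial globally-optimal placement IS stored.
    scheduledExecutors(0) = Seq("exec-1")
    scheduledExecutors(1) = Seq("exec-2")
  }

  def rescheduleReceiver(streamId: Int): Seq[String] = {
    // Locally-optimal candidates for a single rejected receiver.
    // They are returned for launching but deliberately NOT written
    // back into scheduledExecutors, so the global plan survives.
    Seq("exec-3")
  }
}
```

With this shape, calling `rescheduleReceiver(0)` leaves `scheduledExecutors(0)` untouched, which is exactly the property the comment above is defending.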

