spark-reviews mailing list archives

From shubhamchopra <...@git.apache.org>
Subject [GitHub] spark pull request #17325: [SPARK-19803][CORE][TEST] Proactive replication t...
Date Mon, 27 Mar 2017 19:43:39 GMT
Github user shubhamchopra commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17325#discussion_r108261401
  
    --- Diff: core/src/test/scala/org/apache/spark/storage/BlockManagerReplicationSuite.scala ---
    @@ -481,27 +481,39 @@ class BlockManagerProactiveReplicationSuite extends BlockManagerReplicationBehav
         assert(blockLocations.size === replicationFactor)
     
         // remove a random blockManager
    -    val executorsToRemove = blockLocations.take(replicationFactor - 1)
    +    val executorsToRemove = blockLocations.take(replicationFactor - 1).toSet
         logInfo(s"Removing $executorsToRemove")
    -    executorsToRemove.foreach{exec =>
    -      master.removeExecutor(exec.executorId)
    +    initialStores.filter(bm => executorsToRemove.contains(bm.blockManagerId)).foreach { bm =>
    +      master.removeExecutor(bm.blockManagerId.executorId)
    +      bm.stop()
           // giving enough time for replication to happen and new block be reported to master
    -      Thread.sleep(200)
    +      eventually(timeout(5 seconds), interval(100 millis)) {
    +        val newLocations = master.getLocations(blockId).toSet
    +        assert(newLocations.size === replicationFactor)
    +      }
         }
     
    -    val newLocations = eventually(timeout(5 seconds), interval(10 millis)) {
    +    val newLocations = eventually(timeout(5 seconds), interval(100 millis)) {
           val _newLocations = master.getLocations(blockId).toSet
           assert(_newLocations.size === replicationFactor)
           _newLocations
         }
         logInfo(s"New locations : $newLocations")
    -    // there should only be one common block manager between initial and new locations
    -    assert(newLocations.intersect(blockLocations.toSet).size === 1)
     
    -    // check if all the read locks have been released
    +    // new locations should not contain stopped block managers
    +    assert(newLocations.forall(bmId => !executorsToRemove.contains(bmId)),
    +      "New locations contain stopped block managers.")
    +
    +    // this is to ensure the last read lock gets released before we try to
    +    // check for read-locks. The check for read-locks using the method below is not
    +    // idempotent, and therefore can't be used in an `eventually` block.
    +    Thread.sleep(500)
    --- End diff --
    
    Got it. Apologies for the confusion. I have merged the PR. Thanks!
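
A minimal, standalone sketch (not from the PR) of the ScalaTest eventually polling pattern the diff switches to in place of a fixed Thread.sleep. The AtomicInteger and background thread here are placeholders standing in for what master.getLocations(blockId) would report once replication completes in the real test:

    import java.util.concurrent.atomic.AtomicInteger

    import org.scalatest.Assertions._
    import org.scalatest.concurrent.Eventually._
    import org.scalatest.time.SpanSugar._

    object EventuallyPollingSketch {
      def main(args: Array[String]): Unit = {
        val replicationFactor = 3

        // Placeholder for the replica count the master reports; in the real
        // test this is master.getLocations(blockId).toSet.size.
        val reportedReplicas = new AtomicInteger(1)

        // Simulate replication finishing asynchronously after ~1 second.
        new Thread(new Runnable {
          override def run(): Unit = {
            Thread.sleep(1000)
            reportedReplicas.set(replicationFactor)
          }
        }).start()

        // Poll the assertion every 100 ms for up to 5 seconds; eventually
        // rethrows the last failure if the block never passes in time.
        eventually(timeout(5.seconds), interval(100.millis)) {
          assert(reportedReplicas.get() === replicationFactor)
        }

        println(s"Master reports $replicationFactor replicas")
      }
    }

The read-lock check in the PR, by contrast, stays behind a fixed Thread.sleep(500) because, per the diff comment, that check is not idempotent and therefore cannot be retried inside an eventually block.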


