From: andrewor14
To: reviews@spark.apache.org
Reply-To: reviews@spark.apache.org
Subject: [GitHub] spark pull request: [SPARK-10618] [Mesos] Refactoring scheduling c...
Date: Mon, 1 Feb 2016 23:28:54 +0000 (UTC)

Github user andrewor14 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/10326#discussion_r51499069

    --- Diff: core/src/test/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackendSuite.scala ---
    @@ -184,4 +193,47 @@ class CoarseMesosSchedulerBackendSuite extends SparkFunSuite
         verify(driver, times(1)).reviveOffers()
       }
    +
    +  test("isOfferSatisfiesRequirements return true when there is a valid offer") {
    +    val schedulerBackend = createSchedulerBackendForGivenSparkConf(sc)
    +
    +    assert(schedulerBackend.isOfferSatisfiesRequirements("Slave1", 10000, 5, sc))
    +  }
    +
    +
    +  test("isOfferSatisfiesRequirements return false when memory in offer is less" +
    +    " than required memory") {
    +    val schedulerBackend = createSchedulerBackendForGivenSparkConf(sc)
    +
    +    assert(schedulerBackend.isOfferSatisfiesRequirements("Slave1", 1, 5, sc) === false)
    +  }
    +
    +  test("isOfferSatisfiesRequirements return false when cpu in offer is less than required cpu") {
    +    val schedulerBackend = createSchedulerBackendForGivenSparkConf(sc)
    +
    +    assert(schedulerBackend.isOfferSatisfiesRequirements("Slave1", 10000, 0, sc) === false)
    +  }
    +
    +  test("isOfferSatisfiesRequirements return false when offer is from slave already running" +
    +    " an executor") {
    +    val schedulerBackend = createSchedulerBackendForGivenSparkConf(sc)
    +    schedulerBackend.slaveIdsWithExecutors += "Slave2"
    +
    +    assert(schedulerBackend.isOfferSatisfiesRequirements("Slave2", 10000, 5, sc) === false)
    +  }
    +
    +  test("isOfferSatisfiesRequirements return false when task is failed more than " +
    +    "MAX_SLAVE_FAILURES times on the given slave") {
    +    val schedulerBackend = createSchedulerBackendForGivenSparkConf(sc)
    +    schedulerBackend.failuresBySlaveId("Slave3") = 2
    +
    +    assert(schedulerBackend.isOfferSatisfiesRequirements("Slave3", 10000, 5, sc) === false)
    +  }
    +
    +  test("isOfferSatisfiesRequirements return false when max core is already acquired") {
    +    val schedulerBackend = createSchedulerBackendForGivenSparkConf(sc)
    +    schedulerBackend.totalCoresAcquired = 10
    +
    +    assert(schedulerBackend.isOfferSatisfiesRequirements("Slave1", 10000, 5, sc) === false)

    --- End diff --

    instead of `assert(x === false)`, just do `assert(!x)`
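    The suggestion can be sketched standalone; `AssertStyleSketch` and its local value are hypothetical, and plain `Predef.assert` stands in for ScalaTest's `assert` macro:

    ```scala
    // Minimal sketch of the reviewer's point: negating the boolean states the
    // expectation directly, while comparing against `false` adds noise.
    object AssertStyleSketch {
      def main(args: Array[String]): Unit = {
        val offerSatisfies = false

        // Suggested style: assert the negation.
        assert(!offerSatisfies)

        // Equivalent but noisier (outside ScalaTest, `===` is just `==`).
        assert(offerSatisfies == false)

        println("both assertions pass")
      }
    }
    ```

    In ScalaTest the macro-based `assert` also reports the failing expression, so `assert(!x)` loses nothing in diagnostics compared with `assert(x === false)`.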