From: "Chawla,Sumit" <sumitkchawla@gmail.com>
Date: Mon, 26 Dec 2016 18:30:56 -0800
Subject: Re: Mesos Spark Fine Grained Execution - CPU count
To: Michael Gummelt
Cc: Davies Liu, Dev, Timothy Chen, Mehdi Meziane, user@mesos.apache.org, User, dev@spark.apache.org

What is the expected effect of reducing mesosExecutor.cores to zero? What functionality of the executor is impacted? Is the impact just that it behaves like a regular process?

Regards
Sumit Chawla


On Mon, Dec 26, 2016 at 9:25 AM, Michael Gummelt <mgummelt@mesosphere.io> wrote:
> Using 0 for spark.mesos.mesosExecutor.cores is better than dynamic allocation

Maybe for CPU, but definitely not for memory. Executors never shut down in fine-grained mode, which means you only elastically grow and shrink CPU usage, not memory.

On Sat, Dec 24, 2016 at 10:14 PM, Davies Liu <davies.liu@gmail.com> wrote:
Using 0 for spark.mesos.mesosExecutor.cores is better than dynamic
allocation, but you have to pay a little more overhead for launching a
task, which should be OK if the task is not trivial.

Since the direct result (up to 1M by default) will also go through
Mesos, it's better to tune it lower, otherwise Mesos could become the bottleneck.

spark.task.maxDirectResultSize
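
As a concrete illustration, a minimal SparkConf sketch applying both of these suggestions (the Mesos master URL, app name, and the 128 KB value are placeholder assumptions for illustration, not recommendations):

    import org.apache.spark.{SparkConf, SparkContext}

    // Sketch: fine-grained Mesos mode with no dedicated executor CPU and a
    // smaller direct-result cap, so results flowing through Mesos stay small.
    val conf = new SparkConf()
      .setMaster("mesos://zk://zk1:2181/mesos")        // placeholder Mesos master URL
      .setAppName("fine-grained-sketch")               // placeholder app name
      .set("spark.mesos.coarse", "false")              // fine-grained mode
      .set("spark.mesos.mesosExecutor.cores", "0")     // executor reserves no CPU of its own
      .set("spark.task.maxDirectResultSize", "131072") // 128 KB, below the 1M default
    val sc = new SparkContext(conf)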

On Mon, Dec 19, 2016 at 3:23 PM, Chawla,Sumit <sumitkchawla@gmail.com> wrote:
> Tim,
>
> We will try to run the application in coarse grain mode, and share the
> findings with you.
>
> Regards
> Sumit Chawla
>
>
> On Mon, Dec 19, 2016 at 3:11 PM, Timothy Chen <tnachen@gmail.com> wrote:
>
>> Dynamic allocation works with Coarse grain mode only; we weren't aware of
>> a need for Fine grain mode after we enabled dynamic allocation support
>> on the coarse grain mode.
>>
>> What's the reason you're running fine grain mode instead of coarse
>> grain + dynamic allocation?
>>
>> Tim
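
For anyone trying that route, a rough sketch of the coarse-grained + dynamic allocation settings (standard Spark property names; the idle timeout and executor bounds below are placeholder assumptions, and on Mesos the external shuffle service must also be running on each agent):

    import org.apache.spark.SparkConf

    // Sketch: coarse-grained Mesos plus dynamic allocation, so idle executors
    // are released after a timeout instead of holding CPUs until the job ends.
    val conf = new SparkConf()
      .set("spark.mesos.coarse", "true")                          // coarse-grained mode
      .set("spark.dynamicAllocation.enabled", "true")
      .set("spark.shuffle.service.enabled", "true")               // external shuffle service required
      .set("spark.dynamicAllocation.executorIdleTimeout", "60s")  // placeholder idle timeout
      .set("spark.dynamicAllocation.minExecutors", "1")           // placeholder bounds
      .set("spark.dynamicAllocation.maxExecutors", "32")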
>>
>> On Mon, Dec 19, 2016 at 2:45 PM, Mehdi Meziane
>> <mehdi.meziane@ldmobile.net> wrote:
>> > We would be interested in the results if you give a try to Dynamic
>> allocation
>> > with Mesos!
>> >
>> >
>> > ----- Original Message -----
>> > From: "Michael Gummelt" <mgummelt@mesosphere.io>
>> > To: "Sumit Chawla" <sumitkchawla@gmail.com>
>> > Cc: user@mesos.apache.org, dev@mesos.apache.org, "User"
>> > <user@spark.apache.org>, dev@spark.apache.org
>> > Sent: Monday, December 19, 2016 22:42:55 GMT +01:00 Amsterdam / Berlin /
>> > Berne / Rome / Stockholm / Vienna
>> > Subject: Re: Mesos Spark Fine Grained Execution - CPU count
>> >
>> >
>> >> Is this problem of idle executors sticking around solved in Dynamic
>> >> Resource Allocation? Is there some timeout after which idle executors
>> can
>> >> just shut down and clean up their resources.
>> >
>> > Yes, that's exactly what dynamic allocation does. But again I have no
>> idea
>> > what the state of dynamic allocation + Mesos is.
>> >
>> > On Mon, Dec 19, 2016 at 1:32 PM, Chawla,Sumit <sumitkchawla@gmail.com>
>> > wrote:
>> >>
>> >> Great. Makes much better sense now. What would be the reason to have
>> >> spark.mesos.mesosExecutor.cores more than 1, as this number doesn't
>> include
>> >> the number of cores for tasks.
>> >>
>> >> So in my case it seems like 30 CPUs are allocated to executors. And
>> there
>> >> are 48 tasks so 48 + 30 = 78 CPUs. And I am noticing this gap of 30 is
>> >> maintained till the last task exits. This explains the gap. Thanks
>> >> everyone. I am still not sure how this number 30 is calculated. (Is
>> it
>> >> dynamic based on current resources, or is it some configuration? I
>> have 32
>> >> nodes in my cluster).
>> >>
>> >> Is this problem of idle executors sticking around solved in Dynamic
>> >> Resource Allocation? Is there some timeout after which idle executors
>> can
>> >> just shut down and clean up their resources.
>> >>
>> >>
>> >> Regards
>> >> Sumit Chawla
>> >>
>> >>
>> >> On Mon, Dec 19, 2016 at 12:45 PM, Michael Gummelt <mgummelt@mesosphere.io>
>> >> wrote:
>> >>>
>> >>> > I should presume that the no. of executors should be less than the number
>> of
>> >>> > tasks.
>> >>>
>> >>> No. Each executor runs 0 or more tasks.
>> >>>
>> >>> Each executor consumes 1 CPU, and each task running on that executor
>> >>> consumes another CPU. You can customize this via
>> >>> spark.mesos.mesosExecutor.cores
>> >>> (https://github.com/apache/spark/blob/v1.6.3/docs/running-on-mesos.md)
>> and
>> >>> spark.task.cpus
>> >>> (https://github.com/apache/spark/blob/v1.6.3/docs/configuration.md)
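
To make the accounting explicit, a small sketch using the default values, with the 30 registered executors taken from the observation in this thread rather than derived:

    // Sketch of the fine-grained CPU accounting described above (defaults assumed).
    val mesosExecutorCores = 1   // spark.mesos.mesosExecutor.cores (default)
    val taskCpus           = 1   // spark.task.cpus (default)
    val liveExecutors      = 30  // executors that registered and never shut down (observed)
    val runningTasks       = 48  // one task per partition at the start of the stage
    val cpusInUse = liveExecutors * mesosExecutorCores + runningTasks * taskCpus
    println(cpusInUse)  // 30*1 + 48*1 = 78, matching the Mesos UI; as tasks finish
                        // only the task term shrinks, so the 30 executor CPUs
                        // linger until the job exits.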
>> >>>
>> >>> On Mon, Dec 19, 2016 at 12:09 PM, Chawla,Sumit <sumitkchawla@gmail.com
>> >
>> >>> wrote:
>> >>>>
>> >>>> Ah thanks. Looks like I skipped reading this "Neither will executors
>> >>>> terminate when they're idle."
>> >>>>
>> >>>> So in my job scenario, I should presume that the no. of executors should
>> >>>> be less than the number of tasks. Ideally one executor should execute 1
>> or more
>> >>>> tasks. But I am observing something strange instead. I start my job
>> with
>> >>>> 48 partitions for a Spark job. In the Mesos UI I see that the number of tasks
>> is 48,
>> >>>> but the no. of CPUs is 78, which is way more than 48. Here I am assuming
>> that 1
>> >>>> CPU is 1 executor. I am not specifying any configuration to set
>> number of
>> >>>> cores per executor.
>> >>>>
>> >>>> Regards
>> >>>> Sumit Chawla
>> >>>>
>> >>>>
>> >>>> On Mon, Dec 19, 2016 at 11:35 AM, Joris Van Remoortere
>> >>>> <joris@mesosphere.io> wrote:
>> >>>>>
>> >>>>> That makes sense. From the documentation it looks like the executors
>> >>>>> are not supposed to terminate:
>> >>>>>
>> >>>>> http://spark.apache.org/docs/latest/running-on-mesos.html#fine-grained-deprecated
>> >>>>>>
>> >>>>>> Note that while Spark tasks in fine-grained will relinquish cores as
>> >>>>>> they terminate, they will not relinquish memory, as the JVM does
>> not give
>> >>>>>> memory back to the Operating System. Neither will executors
>> terminate when
>> >>>>>> they're idle.
>> >>>>>
>> >>>>>
>> >>>>> I suppose your task to executor CPU ratio is low enough that it looks
>> >>>>> like most of the resources are not being reclaimed. If your tasks
>> were using
>> >>>>> significantly more CPU, the amortized cost of the idle executors
>> would not be
>> >>>>> such a big deal.
>> >>>>>
>> >>>>>
>> >>>>> --
>> >>>>> Joris Van Remoortere
>> >>>>> Mesosphere
>> >>>>>
>> >>>>> On Mon, Dec 19, 2016 at 11:26 AM, Timothy Chen <tnachen@gmail.com>
>> >>>>> wrote:
>> >>>>>>
>> >>>>>> Hi Chawla,
>> >>>>>>
>> >>>>>> One possible reason is that Mesos fine grain mode also takes up
>> cores
>> >>>>>> to run the executor per host, so if you have 20 agents running Fine
>> >>>>>> grained executors, it will take up 20 cores while the job is still running.
>> >>>>>>
>> >>>>>> Tim
>> >>>>>>
>> >>>>>> On Fri, Dec 16, 2016 at 8:41 AM, Chawla,Sumit <
>> sumitkchawla@gmail.com>
>> >>>>>> wrote:
>> >>>>>> > Hi
>> >>>>>> >
>> >>>>>> > I am using Spark 1.6. I have one query about the Fine Grained model in
>> >>>>>> > Spark.
>> >>>>>> > I have a simple Spark application which transforms A -> B. It's a
>> >>>>>> > single
>> >>>>>> > stage application. To begin, the program starts with 48
>> >>>>>> > partitions.
>> >>>>>> > When the program starts running, the Mesos UI shows 48 tasks and
>> >>>>>> > 48 CPUs
>> >>>>>> > allocated to the job. Now as the tasks get done, the number of active
>> >>>>>> > tasks
>> >>>>>> > starts decreasing. However, the number of CPUs does not
>> >>>>>> > decrease
>> >>>>>> > proportionally. When the job was about to finish, there was a
>> single
>> >>>>>> > remaining task, however the CPU count was still 20.
>> >>>>>> >
>> >>>>>> > My question is why there is no one-to-one mapping between tasks
>> >>>>>> > and CPUs
>> >>>>>> > in Fine grained? How can these CPUs be released when the job is
>> >>>>>> > done, so
>> >>>>>> > that other jobs can start.
>> >>>>>> >
>> >>>>>> >
>> >>>>>> > Regards
>> >>>>>> > Sumit Chawla
>> >>>>>
>> >>>>>
>> >>>>
>> >>>
>> >>>
>> >>>
>> >>> --
>> >>> Michael Gummelt
>> >>> Software Engineer
>> >>> Mesosphere
>> >>
>> >>
>> >
>> >
>> >
>> > --
>> > Michael Gummelt
>> > Software Engineer
>> > Mesosphere
>>



--
 - Davies



--
Michael Gummelt
Software Engineer
Mesosphere
