Subject: Re: Basic storm question
From: Nathan Leung <ncleung@gmail.com>
To: user@storm.incubator.apache.org
Date: Wed, 2 Apr 2014 11:31:13 -0400

The extra task/executor is the acker thread.

On Tue, Apr 1, 2014 at 9:23 PM, Huiliang Zhang <zhlntu@gmail.com> wrote:

> I just submitted ExclamationTopology for testing.
>
>     builder.setSpout("word", new TestWordSpout(), 10);
>
>     builder.setBolt("exclaim1", new ExclamationBolt(), 3).shuffleGrouping("word");
>
>     builder.setBolt("exclaim2", new ExclamationBolt(), 2).shuffleGrouping("exclaim1");
>
> I am supposed to see 15 executors. However, I see 16 executors and 16 tasks in the topology summary on the Storm UI. The executor counts for the individual spout and bolts are correct and add up to 15. Is that a bug in the topology summary display?
>
> My cluster consists of 2 supervisors, and each has 4 workers defined.
>
> Thanks.
>
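[For reference: a minimal, untested sketch of submitting the same topology with the acker count made explicit, which accounts for the 16th executor (10 + 3 + 2 spout/bolt executors plus 1 acker). Config.setNumAckers, StormSubmitter, TopologyBuilder, and TestWordSpout are assumed from the backtype.storm API of that era; the ExclamationBolt here is a simplified stand-in for the storm-starter bolt, and the worker count of 5 is purely illustrative, since the thread does not say what was actually requested.]

    import backtype.storm.Config;
    import backtype.storm.StormSubmitter;
    import backtype.storm.testing.TestWordSpout;
    import backtype.storm.topology.BasicOutputCollector;
    import backtype.storm.topology.OutputFieldsDeclarer;
    import backtype.storm.topology.TopologyBuilder;
    import backtype.storm.topology.base.BaseBasicBolt;
    import backtype.storm.tuple.Fields;
    import backtype.storm.tuple.Tuple;
    import backtype.storm.tuple.Values;

    public class ExclamationSketch {

        // Simplified stand-in for the storm-starter ExclamationBolt:
        // appends "!!!" to the incoming word and re-emits it.
        public static class ExclamationBolt extends BaseBasicBolt {
            @Override
            public void execute(Tuple tuple, BasicOutputCollector collector) {
                collector.emit(new Values(tuple.getString(0) + "!!!"));
            }

            @Override
            public void declareOutputFields(OutputFieldsDeclarer declarer) {
                declarer.declare(new Fields("word"));
            }
        }

        public static void main(String[] args) throws Exception {
            TopologyBuilder builder = new TopologyBuilder();
            // 10 + 3 + 2 = 15 executors for the spout and the two bolts
            builder.setSpout("word", new TestWordSpout(), 10);
            builder.setBolt("exclaim1", new ExclamationBolt(), 3).shuffleGrouping("word");
            builder.setBolt("exclaim2", new ExclamationBolt(), 2).shuffleGrouping("exclaim1");

            Config conf = new Config();
            conf.setNumWorkers(5);  // illustrative worker count only
            conf.setNumAckers(1);   // pin the acker count; this is the extra (16th) executor shown in the UI
            StormSubmitter.submitTopology("exclamation-test", conf, builder.createTopology());
        }
    }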
> On Tue, Apr 1, 2014 at 1:43 PM, Nathan Leung <ncleung@gmail.com> wrote:
>
>> By default supervisor nodes can run up to 4 workers. This is configurable in storm.yaml (for example, see supervisor.slots.ports here: https://github.com/nathanmarz/storm/blob/master/conf/defaults.yaml). Memory should be split between the workers. It's a typical Java heap, so anything running in that worker process shares the heap.
>>
>> On Tue, Apr 1, 2014 at 4:10 PM, David Crossland <david@elastacloud.com> wrote:
>>
>>> On said subject, how does memory allocation work in these cases? Assuming 1 worker per node, would you just dump all the available memory into worker.childopts? I guess the memory pool would be shared between the spawned threads as appropriate to their needs?
>>>
>>> I'm assuming the equivalent options for supervisor/nimbus are fine left at their defaults. Given that the workers/spouts/bolts are the working parts of the topology, these would be where I should target the available memory?
>>>
>>> D
>>>
>>> From: Huiliang Zhang
>>> Sent: Tuesday, 1 April 2014 19:47
>>> To: user@storm.incubator.apache.org
>>>
>>> Thanks. It would be good if there were some example figures explaining the relationship between tasks, workers, and threads.
>>>
>>> On Sat, Mar 29, 2014 at 6:34 AM, Susheel Kumar Gadalay <skgadalay@gmail.com> wrote:
>>>
>>>> No, a single worker is dedicated to a single topology no matter how many threads it spawns for different bolts/spouts. A single worker cannot be shared across multiple topologies.
>>>>
>>>> On 3/29/14, Nathan Leung <ncleung@gmail.com> wrote:
>>>> > From what I have seen, the second topology is run with 1 worker until you kill the first topology or add more worker slots to your cluster.
>>>> >
>>>> > On Sat, Mar 29, 2014 at 2:57 AM, Huiliang Zhang <zhlntu@gmail.com> wrote:
>>>> >
>>>> >> Thanks. I am still not clear.
>>>> >>
>>>> >> Do you mean that in a single worker process there will be multiple threads and each thread will handle part of a topology? If so, what does the number of workers mean when submitting a topology?
>>>> >>
>>>> >> On Fri, Mar 28, 2014 at 11:18 PM, padma priya chitturi <padmapriya30@gmail.com> wrote:
>>>> >>
>>>> >>> Hi,
>>>> >>>
>>>> >>> No, it's not the case. No matter how many topologies you submit, the workers will be shared among the topologies.
>>>> >>>
>>>> >>> Thanks,
>>>> >>> Padma Ch
>>>> >>>
>>>> >>> On Sat, Mar 29, 2014 at 5:11 AM, Huiliang Zhang <zhlntu@gmail.com> wrote:
>>>> >>>
>>>> >>>> Hi,
>>>> >>>>
>>>> >>>> I have a simple question about Storm.
>>>> >>>>
>>>> >>>> My cluster has just 1 supervisor and 4 ports are defined to run 4 workers. I first submit a topology which needs 3 workers. Then I submit another topology which needs 2 workers. Does this mean that the 2nd topology will never be run?
>>>> >>>>
>>>> >>>> Thanks,
>>>> >>>> Huiliang
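[For reference: the slot and heap settings discussed above live in storm.yaml on each supervisor node. A sketch of the relevant entries, assuming the stock defaults of that era as a starting point; the exact ports and heap size are illustrative and should be tuned per node.]

    # One worker slot per listed port; add or remove ports to change
    # how many worker processes a supervisor node may run.
    supervisor.slots.ports:
        - 6700
        - 6701
        - 6702
        - 6703

    # JVM options for each worker process; every spout/bolt executor
    # running in that worker shares this single heap.
    worker.childopts: "-Xmx768m"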