Subject: Re: Basic storm question
From: Huiliang Zhang <zhlntu@gmail.com>
To: user@storm.incubator.apache.org
Date: Wed, 2 Apr 2014 11:01:00 -0700

Hi Nathan,

The last bolt just emits the tuples, and no other bolt in the topology will consume and ack them. Do you mean that Storm automatically creates an extra executor to deal with those tuples?

Thanks,
Huiliang


On Wed, Apr 2, 2014 at 8:31 AM, Nathan Leung <ncleung@gmail.com> wrote:
the extra task/executor is the acker thread.
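
(A minimal sketch, not the exact storm-starter code, of where that extra executor comes from and how it can be controlled at submit time; the class name and topology name are placeholders, and ExclamationBolt is assumed to be the bolt from the ExclamationTopology quoted below:)

    import backtype.storm.Config;
    import backtype.storm.StormSubmitter;
    import backtype.storm.testing.TestWordSpout;
    import backtype.storm.topology.TopologyBuilder;

    public class ExclamationSketch {
        public static void main(String[] args) throws Exception {
            TopologyBuilder builder = new TopologyBuilder();
            builder.setSpout("word", new TestWordSpout(), 10);                                  // 10 executors
            builder.setBolt("exclaim1", new ExclamationBolt(), 3).shuffleGrouping("word");      // 3 executors
            builder.setBolt("exclaim2", new ExclamationBolt(), 2).shuffleGrouping("exclaim1");  // 2 executors

            Config conf = new Config();
            conf.setNumWorkers(4);   // worker processes (slots) requested for this topology
            conf.setNumAckers(1);    // the 16th executor: one acker thread; setting 0 disables acking

            StormSubmitter.submitTopology("exclamation-test", conf, builder.createTopology());
        }
    }

With setNumAckers(0) the extra executor disappears, but so does tuple tracking for the spout's messages.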


On Tue, Apr 1, 2014 at 9:23 PM, Huiliang Zhang <zhlntu@gmail.com> wrote:
I just submitted ExclamationTopology for testing.

    builder.setSpout("word", new TestWordSpout(), 10);

    builder.setBolt("exclaim1", new ExclamationBolt(), 3).shuffleGrouping("word");

    builder.setBolt("exclaim2", new ExclamationBolt(), 2).shuffleGrouping("exclaim1");

I expected to see 15 executors. However, I see 16 executors and 16 tasks in the topology summary on the Storm UI. The executor counts for the individual spout and bolts are correct and add up to 15. Is that a bug in the topology summary display?

My cluster consists of 2 supervisors and each has 4 workers defined.


Thanks.



On Tue, Apr 1, 2014 at 1:43 PM, Nathan Leung <ncleung@gmail.com> wrote:
By default supervisor nodes can run up to 4 workers. This is configurable in storm.yaml (for example see supervisor.slots.ports here: https://github.com/nathanmarz/storm/blob/master/conf/defaults.yaml). Memory should be split between the workers. It's a typical Java heap, so anything running on that worker process shares the heap.
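
(For reference, a sketch of the relevant storm.yaml entries on a supervisor node; the values shown are just the stock defaults from that defaults.yaml, so treat them as a starting point rather than a recommendation:)

    # storm.yaml -- one worker slot per port, so four ports = up to 4 workers
    supervisor.slots.ports:
        - 6700
        - 6701
        - 6702
        - 6703
    # JVM options for each worker process; raise -Xmx here to give every worker a bigger heap
    worker.childopts: "-Xmx768m"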


On Tue, Apr 1, 2014 at 4:10 PM, David Crossland <david@elastacloud.com> wrote:
On said subject, how does memory allocation work in these cases? Assuming 1 worker per node, would you just dump all the available memory into worker.childopts? I guess the memory pool would be shared between the spawned threads as appropriate to their needs?

I'm assuming the equivalent options for supervisor/nimbus are fine left at defaults. Given that the workers/spouts/bolts are the working parts of the topology, these would be where I should target available memory?

D

From: Huiliang Zhang
Sent: Tuesday, 1 April 2014 19:47
To: user@storm.incubator.apache.org

Thanks. It would be good if there were some example figures explaining the relationship between tasks, workers, and threads.
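
(In the absence of figures, a rough sketch of how the three levels map to code, assuming the same ExclamationBolt as above; the numbers and the class name are arbitrary:)

    import backtype.storm.Config;
    import backtype.storm.LocalCluster;
    import backtype.storm.testing.TestWordSpout;
    import backtype.storm.topology.TopologyBuilder;

    public class ParallelismSketch {
        public static void main(String[] args) throws Exception {
            TopologyBuilder builder = new TopologyBuilder();
            builder.setSpout("word", new TestWordSpout(), 4);       // 4 executors (threads) for the spout
            builder.setBolt("exclaim1", new ExclamationBolt(), 3)   // 3 executors for this bolt ...
                   .setNumTasks(6)                                  // ... running 6 task instances, 2 per thread
                   .shuffleGrouping("word");

            Config conf = new Config();
            conf.setNumWorkers(2);   // all of those executors are packed into 2 worker processes (JVMs)

            // Local mode only to keep the sketch self-contained; on a real cluster use StormSubmitter.
            LocalCluster cluster = new LocalCluster();
            cluster.submitTopology("parallelism-sketch", conf, builder.createTopology());
            Thread.sleep(10000);
            cluster.killTopology("parallelism-sketch");
            cluster.shutdown();
        }
    }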


On Sat, Mar 29, 2014 at 6:34 AM, Susheel Kumar Gadalay <skgadalay@gmail.com> wrote:
No, a single worker is dedicated to a single topology no matter how
many threads it spawns for different bolts/spouts.
A single worker cannot be shared across multiple topologies.

On 3/29/14, Nathan Leung <ncleung@gmail.com> wrote:
> From what I have seen, the second topology is run with 1 worker until you
> kill the first topology or add more worker slots to your cluster.
>
>
> On Sat, Mar 29, 2014 at 2:57 AM, Huiliang Zhang <zhlntu@gmail.com> wrote:
>
>> Thanks. I am still not clear.
>>
>> Do you mean that in a single worker process, there will be multiple
>> threads and each thread will handle part of a topology? If so, what does
>> the number of workers mean when submitting topology?
>>
>>
>> On Fri, Mar 28, 2014 at 11:18 PM, padma priya chitturi <
>> padmapriya30@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> No, it's not the case. No matter how many topologies you submit, the
>>> workers will be shared among the topologies.
>>>
>>> Thanks,
>>> Padma Ch
>>>
>>>
>>> On Sat, Mar 29, 2014 at 5:11 AM, Huiliang Zhang <zhlntu@gmail.com>
>>> wrote:
>>>
>>>> Hi,
>>>>
>>>> I have a simple question about storm.
>>>>
>>>> My cluster has just 1 supervisor and 4 ports are defined to run 4
>>>> workers. I first submit a topology which needs 3 workers. Then I submit
>>>> another topology which needs 2 workers. Does this mean that the 2nd
>>>> topology will never be run?
>>>>
>>>> Thanks,
>>>> Huiliang
>>>>
>>>
>>>
>>
>




