storm-user mailing list archives

From Spico Florin <spicoflo...@gmail.com>
Subject Re: Workers elasticity
Date Tue, 07 Jan 2014 07:10:18 GMT
Hello, Michael!
  Thank you for the thorough explanation. Regarding the "round-robin
fashion", you have already answered that in point 1) by pointing to the fair
scheduler.

Best regards,
   Florin


On Mon, Jan 6, 2014 at 10:52 PM, Michael Rose <michael@fullcontact.com> wrote:

> Each machine can support a configurable number of workers. If a machine
> goes away, Storm will attempt to reassign the orphaned workers to other
> machines using the fair scheduler.
>
> 1) Should you not have enough worker slots, your topology will be in a
> 'broken' state. However, if you allow 4 workers per machine and run your
> topology with 8 workers, you can run a minimum of 2 machines and still
> support the topology.
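>
> For instance, the worker count is requested in the topology configuration
> at submit time (a minimal sketch; the topology name and builder variable
> are placeholders, not from this thread):
>
>   import backtype.storm.Config;
>   import backtype.storm.StormSubmitter;
>
>   Config conf = new Config();
>   conf.setNumWorkers(8);  // request 8 worker slots across the cluster
>   StormSubmitter.submitTopology("my-topology", conf, builder.createTopology());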
>
> If you plan to auto-scale your workers, you'll need a long cooldown time
> between changes. A rebalance isn't instant: it waits {TUPLE_TIMEOUT_TIME}
> so the topology can drain in-flight tuples before workers are reshuffled.
> Additionally, when adding workers you'll need to trigger a rebalance.
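>
> For example, a rebalance can be kicked off from the command line (the
> topology name and numbers here are placeholders):
>
>   storm rebalance my-topology -w 60 -n 5
>
> where -w is the wait time in seconds before the rebalance takes effect and
> -n is the new total number of workers.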
>
> 2) If workers > slots, the topology will attempt to function but will
> ultimately freeze as the send buffers to those workers fill. I'm not sure
> what you mean by a 'round-robin fashion to distribute load' -- the
> ShuffleGrouping will partition work evenly across tasks.
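>
> As a quick sketch (the spout/bolt classes here are hypothetical, just to
> illustrate the wiring):
>
>   import backtype.storm.topology.TopologyBuilder;
>
>   TopologyBuilder builder = new TopologyBuilder();
>   builder.setSpout("words", new WordSpout(), 2);
>   // shuffleGrouping sends each tuple to a randomly chosen task of the
>   // bolt, so load is spread evenly across its 8 tasks
>   builder.setBolt("counter", new WordCountBolt(), 8).shuffleGrouping("words");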
>
> 3) Yes, in storm.yaml, supervisor.slots.ports. By default it'll run with 4
> slots per machine. See
> https://github.com/nathanmarz/storm/blob/master/conf/defaults.yaml#L77
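>
> For example, to allow 6 workers on a machine, storm.yaml could list 6
> ports (6700-6703 are the shipped defaults; the extra two are just for
> illustration):
>
>   supervisor.slots.ports:
>       - 6700
>       - 6701
>       - 6702
>       - 6703
>       - 6704
>       - 6705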
>
> Michael Rose (@Xorlev <https://twitter.com/xorlev>)
> Senior Platform Engineer, FullContact <http://www.fullcontact.com/>
> michael@fullcontact.com
>
>
> On Mon, Jan 6, 2014 at 11:08 AM, Spico Florin <spicoflorin@gmail.com> wrote:
>
>> Hello!
>>  I'm a newbie to Storm and also to the Amazon cloud. I have the following
>> scenario:
>>
>>   1. I have a topology that runs on 3 workers on EC2.
>>   2. Due to increasing load, EC2 instantiates 2 new instances and I
>> rebalance to 5 workers.
>>   3. After the demand drops, EC2 releases the 2 instances and I forget to
>> decrease the number of workers back to 3.
>>
>> Questions:
>> 1. In this case, what is the behavior of the application? Will it signal
>> an error that there are more workers allocated than existing machines, or
>> will it continue to run as if nothing has happened?
>>
>> 2. More generally, what is the behavior of an application that declares
>> more workers than the number of instances? Is there a round-robin fashion
>> to distribute the load among the workers?
>>
>> 3. Can I declare more workers on the same machine? If yes, how?
>>
>> I look forward to your answers.
>>
>> Regards,
>>   Florin
>>
>>
>
