storm-user mailing list archives

From Michael Giroux <michael_a_gir...@yahoo.com>
Subject Re: Problem moving topology from 1.2.3 to 2.2.0 - tuple distribution across cluster
Date Mon, 16 Nov 2020 12:33:20 GMT
Hello Paul,
Things are sized (1:160) so that the ontology bolts are always faster than the transform bolt, assuming all of them are getting workload across the 4 nodes in the cluster.
With only 1 node receiving tuples, things get overloaded pretty quickly: I get Netty heap-space exceptions, and still no tuples reach the other nodes. After disabling LoadAwareShuffleGrouping by setting topology.disable.loadaware.messaging: true, all 4 nodes get tuples and all are happy.
FWIW
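For anyone hitting the same thing, a minimal sketch of setting that override per topology; the class name and worker count below are placeholders, and the key string is exactly the one quoted above (as far as I know it can also be set cluster-wide in storm.yaml):

    import org.apache.storm.Config;

    public class DisableLoadAwareConfig {

        // Builds the topology conf used here: plain shuffleGrouping instead of
        // LoadAwareShuffleGrouping, so tuples are spread round-robin across all
        // downstream executors regardless of measured load.
        public static Config build() {
            Config conf = new Config();
            conf.setNumWorkers(16);  // 16 workers across 4 nodes, as in this thread
            conf.put("topology.disable.loadaware.messaging", true);
            return conf;
        }
    }

(Storm's Config class may also expose a constant for this key; the raw string works either way.)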
On Friday, November 13, 2020, 03:07:00 PM EST, Paul Jose <paul.j2@ugamsolutions.com> wrote:

 Hi Michael,

Could it be that the tuples are being processed much faster by the ontology bolt executors than the rate at which your transform bolt is emitting them? In that case the executors on the same node would always be ready before the workers on other nodes.
If your transform bolt is emitting faster than your ontology bolts can process, then I'm not really sure why you're facing this issue.

Best Regards,
Paul

From: Michael Giroux <michael_a_giroux@yahoo.com>
Sent: Saturday, November 14, 2020 12:23:43 AM
To: user@storm.apache.org <user@storm.apache.org>
Subject: Problem moving topology from 1.2.3 to 2.2.0 - tuple distribution across cluster

Hello, all,
I have a topology with 16 workers running across 4 nodes. This topology has a bolt "transform" with executors=1 producing a stream that is consumed by a bolt "ontology" with executors=160. Everything is configured with shuffleGrouping.
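(For reference, a minimal self-contained sketch of the wiring described above; SourceSpout and PassThroughBolt are placeholder implementations, and only the component names, parallelism hints, and shuffleGrouping calls reflect the actual topology:)

    import java.util.Map;
    import org.apache.storm.generated.StormTopology;
    import org.apache.storm.spout.SpoutOutputCollector;
    import org.apache.storm.task.TopologyContext;
    import org.apache.storm.topology.BasicOutputCollector;
    import org.apache.storm.topology.OutputFieldsDeclarer;
    import org.apache.storm.topology.TopologyBuilder;
    import org.apache.storm.topology.base.BaseBasicBolt;
    import org.apache.storm.topology.base.BaseRichSpout;
    import org.apache.storm.tuple.Fields;
    import org.apache.storm.tuple.Tuple;
    import org.apache.storm.tuple.Values;

    public class ShuffleSketch {

        // Placeholder spout: emits a constant payload just to drive the stream.
        public static class SourceSpout extends BaseRichSpout {
            private SpoutOutputCollector collector;
            @Override public void open(Map<String, Object> conf, TopologyContext ctx,
                                        SpoutOutputCollector collector) { this.collector = collector; }
            @Override public void nextTuple() { collector.emit(new Values("record")); }
            @Override public void declareOutputFields(OutputFieldsDeclarer d) { d.declare(new Fields("payload")); }
        }

        // Placeholder bolt: forwards the payload unchanged.
        public static class PassThroughBolt extends BaseBasicBolt {
            @Override public void execute(Tuple input, BasicOutputCollector collector) {
                collector.emit(new Values(input.getValue(0)));
            }
            @Override public void declareOutputFields(OutputFieldsDeclarer d) { d.declare(new Fields("payload")); }
        }

        public static StormTopology build() {
            TopologyBuilder builder = new TopologyBuilder();
            builder.setSpout("source", new SourceSpout());
            // One "transform" executor feeding 160 "ontology" executors,
            // both links wired with shuffleGrouping.
            builder.setBolt("transform", new PassThroughBolt(), 1)
                   .shuffleGrouping("source");
            builder.setBolt("ontology", new PassThroughBolt(), 160)
                   .shuffleGrouping("transform");
            return builder.createTopology();
        }
    }

The wiring itself is the same on 1.2.3 and 2.2.0; the behavior difference reported below comes from how shuffleGrouping picks a downstream task at runtime.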
With Storm 1.2.3 all of the "ontology" bolts get their fair share of tuples. When I run Storm 2.2.0, only the "ontology" bolts that are on the same node as the single "transform" bolt get tuples.
Same cluster, same baseline code; the only difference is building against the new Maven artifact. No errors in the logs.
Any thoughts would be welcome. Thanks!