From: Paul Jose <paul.j2@ugamsolutions.com>
To: user@storm.apache.org
Subject: Re: Problem moving topology from 1.2.3 to 2.2.0 - tuple distribution across cluster
Date: Fri, 13 Nov 2020 20:06:38 +0000
In-Reply-To: <1191234055.5208893.1605293623024@mail.yahoo.com>
Hi Michael,

Could it be that the ontology bolt executors are consuming tuples much faster than your transform bolt emits them? In that case the executors on the same node would always be ready before the workers on other nodes.

If, on the other hand, your transform bolt is emitting faster than your ontology bolts can process, then I'm not really sure why you're facing this issue.

Best Regards,
Paul

________________________________
From: Michael Giroux <michael_a_giroux@yahoo.com>
Sent: Saturday, November 14, 2020 12:23:43 AM
To: user@storm.apache.org
Subject: Problem moving topology from 1.2.3 to 2.2.0 - tuple distribution across cluster

Hello, all,

I have a topology with 16 workers running across 4 nodes. This topology has a bolt "transform" with executors=1 producing a stream that is consumed by a bolt "ontology" with executors=160. Everything is configured as shuffleGrouping.

With Storm 1.2.3 all of the "ontology" bolts get their fair share of tuples. When I run Storm 2.2.0 only the "ontology" bolts that are on the same node as the single "transform" bolt get tuples.

Same cluster - same baseline code - only difference is binding in the new maven artifact.

No errors in the logs.

Any thoughts would be welcome. Thanks!
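[Archive note] The symptom described above is consistent with a documented behavior change in Storm 2.x: the default shuffle grouping became load-aware, which can route most tuples to executors in the sender's own worker or node when they appear least loaded, whereas Storm 1.x's plain shuffle distributed tuples evenly. A possible mitigation, not confirmed in this thread, is to disable load-aware messaging; the sketch below assumes the stock Storm 2.2.0 configuration key (verify it against defaults.yaml in your distribution):

```yaml
# storm.yaml (cluster-wide) or per-topology config override.
# Illustrative sketch: restores 1.x-style even shuffle distribution
# by turning off the load-aware grouping introduced in Storm 2.x.
topology.disable.loadaware.messaging: true
```

The same flag can be set per topology in Java via `conf.put(Config.TOPOLOGY_DISABLE_LOADAWARE_MESSAGING, true);` before submitting.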
