Subject: Re: Computing available parallelism units for storm executors
From: Roshan Naik
To: user@storm.apache.org
Date: Wed, 10 Apr 2019 02:06:31 +0000 (UTC)

Yes, that is basically right. If your executors are indeed processing enough data to be utilizing nearly all of the CPU, then you should consider 1 executor per core. It may be a good idea to budget 10-20% of the CPU on each machine for other things, including interactive logins.

Sent from Yahoo Mail for iPhone

On Tuesday, April 9, 2019, 2:25 PM, Jayant Sharma <sharmajayant27@gmail.com> wrote:

Hi,

I was going through P. Taylor Goetz's talk on scaling a Storm cluster (2014). He made an interesting point about computing the available parallelism units in a cluster. For CPU-bound applications this limit was 1 executor per CPU core. I just want to be sure my understanding of this is correct:

If I have 3 supervisor machines, each running 5 workers (JVMs), and each machine has 16 CPU cores, do I have 3 * 16 = 48 parallelism units to distribute among all of my topologies? That would mean the sum of all spout and bolt executors across all the topologies should be 48. What are the implications if I set my executor count higher or lower than this value?

If my understanding is incorrect, can someone please explain how to compute parallelism units and relate them to the number of executors.

Thanks,
Jayant Sharma
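[Editor's sketch, not from the original thread: one way the 3 * 16 = 48 budget discussed above could map onto parallelism hints when defining a topology. MySpout, MyBolt, and the component names are hypothetical placeholders, and the hint values are just one arbitrary split that sums to 48 executors.]

```java
import org.apache.storm.Config;
import org.apache.storm.StormSubmitter;
import org.apache.storm.topology.TopologyBuilder;

public class ParallelismBudgetExample {
    public static void main(String[] args) throws Exception {
        // Cluster from the question: 3 supervisors x 16 cores = 48 parallelism units.
        TopologyBuilder builder = new TopologyBuilder();

        // Parallelism hints set the initial executor count per component.
        // MySpout / MyBolt are placeholders; the hints below total 8 + 24 + 16 = 48,
        // i.e. one executor per core across the whole cluster.
        builder.setSpout("events", new MySpout(), 8);
        builder.setBolt("parse", new MyBolt(), 24).shuffleGrouping("events");
        builder.setBolt("count", new MyBolt(), 16).shuffleGrouping("parse");

        Config conf = new Config();
        // 5 worker JVMs per machine x 3 machines, matching the question's setup;
        // the executors are spread across these 15 workers.
        conf.setNumWorkers(15);

        StormSubmitter.submitTopology("parallelism-budget-example", conf, builder.createTopology());
    }
}
```

Per the reply above, you might size the total somewhat below 48 (roughly 40-43 executors) so that 10-20% of each machine's CPU stays free for the OS, interactive logins, and other processes.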


