Date: Thu, 6 Feb 2014 06:19:19 -0500
From: Marc Vaillant <vaillant@animetrics.com>
To: user@storm.incubator.apache.org
Subject: Re: Can a topology be configured to force a maximum of 1 executor per worker?

Thanks Bijoy, I will test that.
Marc

On Wed, Feb 05, 2014 at 09:45:45PM +0530, bijoy deb wrote:
> Hi Marc,
>
> I believe keeping the total number of executors (i.e. parallelism) across all
> the components (bolts, spouts) less than or equal to the total number of
> workers can be one way to achieve this.
>
> Thanks
> Bijoy
>
>
> On Wed, Feb 5, 2014 at 9:36 PM, Marc Vaillant wrote:
>
> Suppose that you have a bolt whose tasks are not thread safe but you
> still want parallelism. It seems that this could be achieved via
> multiprocessing by forcing a maximum of 1 executor per worker. With
> this constraint, if you chose a parallelism hint of 4 (with default
> executors) you would get 4 tasks in 4 executors each running in a
> separate worker. Can this constraint be configured?
>
> Thanks,
> Marc
>
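
For reference, here is a minimal sketch of Bijoy's suggestion against the
Storm 0.9-era (backtype.storm) Java API: cap the total number of executors
across all components at the number of workers, so the scheduler can place
at most one executor in each worker JVM. The class names MySpout and
UnsafeBolt are placeholders, not from this thread.

    import backtype.storm.Config;
    import backtype.storm.StormSubmitter;
    import backtype.storm.topology.TopologyBuilder;

    public class OneExecutorPerWorkerTopology {
        public static void main(String[] args) throws Exception {
            TopologyBuilder builder = new TopologyBuilder();

            // 1 spout executor + 3 bolt executors = 4 executors in total.
            // MySpout and UnsafeBolt are hypothetical component classes.
            builder.setSpout("spout", new MySpout(), 1);
            builder.setBolt("unsafe-bolt", new UnsafeBolt(), 3)
                   .shuffleGrouping("spout");

            Config conf = new Config();
            // 4 workers >= 4 executors, so each component executor can land
            // in its own worker (separate JVM), giving process-level isolation
            // for the bolt whose tasks are not thread safe.
            conf.setNumWorkers(4);

            StormSubmitter.submitTopology("one-executor-per-worker", conf,
                    builder.createTopology());
        }
    }

Note this relies on the default even scheduler spreading executors across
workers; it is a workaround rather than a hard per-worker limit enforced by
Storm itself.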