From: Gregg Wonderly
Subject: Re: Trunk merge and thread pools
Date: Mon, 7 Dec 2015 01:09:36 +0300
To: dev@river.apache.org

Well Peter, there are lots of things one can do about load management. The obvious solutions are visible in current load balancing on web servers. That simple mechanism of receiving the request and dispatching it into the real servers provides the ability to manage load with appropriate logic.
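That front-door dispatch idea can be sketched minimally in Java (the class and pool sizes here are illustrative assumptions, not River code): a small fixed-size pool with a bounded queue admits requests at the entry point, and anything beyond the bound is shed immediately rather than deferred into the system.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Hypothetical front-door dispatcher: a small fixed pool plus a bounded
// queue, so overload is rejected at the head of the request path instead
// of accumulating deep inside the distributed system.
public class FrontDoorDispatcher {
    private final ExecutorService pool = new ThreadPoolExecutor(
            4, 4,                                   // small fixed-size dispatch pool
            0L, TimeUnit.MILLISECONDS,
            new ArrayBlockingQueue<>(64),           // bounded backlog
            new ThreadPoolExecutor.AbortPolicy());  // shed load when full

    /** Returns true if the request was admitted, false if it was shed. */
    public boolean dispatch(Runnable request) {
        try {
            pool.execute(request);
            return true;
        } catch (RejectedExecutionException overloaded) {
            return false; // caller can fail fast or route elsewhere
        }
    }

    public void shutdown() { pool.shutdown(); }
}
```

The load-management "logic" then lives in one place: the caller of dispatch() decides what a shed request means, which is exactly the property a web load balancer gives you.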
So, put your slowest hardware there, use a small fixed-size dispatch pool, and tune its size to an appropriate percent of available time. That is, time each service request's processing time. Bias those times by appropriate variation in processing time differences. As Amazon does, you can use a PID mechanism to automate throttling.

Gregg

Sent from my iPad

> On Dec 3, 2015, at 3:32 PM, Peter wrote:
>
> Care to share more of your insight?
>
> Peter.
>
> Sent from my Samsung device.
>
> ---- Original message ----
> From: Gregg Wonderly
> Sent: 03/12/2015 06:37:15 pm
> To: dev@river.apache.org
> Subject: Re: Trunk merge and thread pool
>
> The original use of thread pooling was more than likely about getting work done faster by not undergoing the overhead of thread creation, since in distributed systems, deferring work can create deadlock by introducing indefinite wait scenarios if resource limits keep work from being dispatched.
>
> As a general rule of thumb, I have found that waiting until the point of thread creation to introduce load control is never the right design. Instead, load control must happen at the head/beginning of any request into a distributed system.
>
> Gregg
>
> Sent from my iPhone
>
>> On Dec 3, 2015, at 3:26 AM, Peter wrote:
>>
>> Just tried wrapping an Executors.newCachedThreadPool with a thread factory that creates threads as per the original org.apache.river.thread.NewThreadAction.
>>
>> Performance is much improved; the hotspot is gone.
>>
>> There are regression tests with Sun bug IDs which cause OOME. I thought this might prevent the executor from running, but to my surprise both tests pass. These tests failed when I didn't pool threads and just let them be gc'd. These tests created over 11000 threads with waiting tasks. In practice I wouldn't expect that to happen, as an IOException should be thrown.
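The experiment Peter describes above can be sketched roughly as follows (the factory details are assumptions for illustration; the real org.apache.river.thread.NewThreadAction sets its own group, naming, and daemon policy):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: a cached pool whose threads imitate a River-style thread factory
// (daemon threads with a recognisable name). Idle threads are reclaimed by
// the cached pool after 60 seconds, so a burst of thousands of tasks does
// not pin threads forever.
public class RiverStylePool {
    public static ExecutorService newPool(String poolName) {
        final AtomicInteger count = new AtomicInteger();
        ThreadFactory factory = task -> {
            Thread t = new Thread(task, poolName + "-" + count.incrementAndGet());
            t.setDaemon(true); // assumption: mirror NewThreadAction's daemon setting
            return t;
        };
        return Executors.newCachedThreadPool(factory);
    }
}
```

Since newCachedThreadPool uses a SynchronousQueue with an unbounded maximum pool size, it creates a new thread whenever no idle one is available, which matches the "create rather than wait" behavior that removed the hotspot.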
>> However, there are Sun bug IDs 6313626 and 6304782 for these regression tests; if anyone has a record of these bugs or any information they can share, it would be much appreciated.
>>
>> It's worth noting that the JVM memory options should be tuned properly to avoid OOME in any case.
>>
>> Lesson here is, creating threads and gc'ing them is much faster than thread pooling if your thread pool is not well optimised.
>>
>> It's worth noting that ObjectInputStream is now the hotspot for the test; the tested code's hotspots are DatagramSocket and SocketInputStream.
>>
>> ClassLoading is thread-confined; there's a lot of class loading going on, but because it is uncontended, it only consumes 0.2% CPU, about the same as our security architecture overhead (non-encrypted).
>>
>> Regards,
>>
>> Peter.
>>
>> Sent from my Samsung device.
>>
>> ---- Original message ----
>> From: Bryan Thompson
>> Sent: 02/12/2015 11:25:03 pm
>> To:
>> Subject: Re: Trunk merge and thread pools
>>
>> Ah. I did not realize that we were discussing a River-specific ThreadPool vs a Java Concurrency classes ThreadPoolExecutor. I assume that it would be difficult to just substitute in one of the standard executors?
>>
>> Bryan
>>
>>> On Wed, Dec 2, 2015 at 8:18 AM, Peter wrote:
>>>
>>> First, it's worth considering we have a very suboptimal thread pool. There are qa and jtreg tests that limit our ability to do much with ThreadPool.
>>>
>>> There are only two instances of ThreadPool, shared by various jeri endpoint implementations, and other components.
>>>
>>> The implementation is allowed to create numerous threads, limited only by available memory and OOME. At least two tests cause it to create over 11000 threads.
>>>
>>> Also, it previously used a LinkedList queue, but now uses a BlockingQueue; however, the queue still uses poll, not take.
>>>
>>> The limitation seems to be the concern by the original developers that there may be interdependencies between tasks. Most tasks are method invocations from incoming and outgoing remote calls.
>>>
>>> It probably warrants further investigation to see if there's a suitable replacement.
>>>
>>> Regards,
>>>
>>> Peter.
>>>
>>> Sent from my Samsung device.
>>>
>>> ---- Original message ----
>>> From: Bryan Thompson
>>> Sent: 02/12/2015 09:46:13 am
>>> To:
>>> Subject: Re: Trunk merge and thread pools
>>>
>>> Peter,
>>>
>>> It might be worth taking this observation about the thread pool behavior to the java concurrency list. See what feedback you get. I would certainly be interested in what people there have to say about this.
>>>
>>> Bryan
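The poll-vs-take distinction mentioned in the thread matters for worker lifetime, and a minimal illustration may help (worker-loop shapes here are illustrative, not River's ThreadPool code): take() blocks until work arrives, so the thread lives until interrupted, while a timed poll() returns null when the queue stays empty, letting an idle worker retire itself.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Two worker-loop styles over the same BlockingQueue.
public class WorkerLoops {
    // poll(timeout): a worker that sees no work within idleMillis gets
    // null back and exits, so an oversized pool can shrink on its own.
    static void pollingWorker(BlockingQueue<Runnable> queue, long idleMillis)
            throws InterruptedException {
        Runnable task;
        while ((task = queue.poll(idleMillis, TimeUnit.MILLISECONDS)) != null) {
            task.run();
        }
        // Reaching here means the worker timed out idle and retires.
    }

    // take(): never returns null; the worker parks indefinitely waiting
    // for work and only stops when its thread is interrupted.
    static void blockingWorker(BlockingQueue<Runnable> queue)
            throws InterruptedException {
        while (true) {
            queue.take().run();
        }
    }
}
```

Keeping poll rather than take is consistent with a pool that tolerates thousands of transient threads: idle workers drain away instead of accumulating as parked take() callers.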