river-dev mailing list archives

From Gregg Wonderly <gr...@wonderly.org>
Subject Re: Trunk merge and thread pools
Date Sun, 06 Dec 2015 22:09:36 GMT
Well Peter, there are lots of things one can do about load management.  The obvious solutions
are visible in current load balancing on web servers.  That simple mechanism of receiving
the request and dispatching it to the real servers provides the ability to manage load with
appropriate logic.

So, put your slowest hardware there, use a small fixed-size dispatch pool and tune its size
to an appropriate percentage of available time.  That is, time each service request's
processing, and bias those times by the appropriate variation in processing-time differences.

As Amazon does, you can use a PID (proportional-integral-derivative) controller to automate throttling.
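A minimal sketch of such a PID-style throttle (class and parameter names here are hypothetical, not River or Amazon code): feed it one latency observation per request batch and let it nudge an admission limit toward whatever keeps latency at the target.

```java
// Hypothetical sketch of a PID-controlled throttle; all names are
// illustrative.  The controller adjusts an admission limit so observed
// latency converges on the target.
public class PidThrottle {
    private final double kp, ki, kd;      // proportional/integral/derivative gains
    private final double targetLatencyMs; // latency setpoint
    private double integral;              // accumulated error
    private double previousError;         // for the derivative term
    private double permits;               // current admission limit

    public PidThrottle(double kp, double ki, double kd,
                       double targetLatencyMs, double initialPermits) {
        this.kp = kp;
        this.ki = ki;
        this.kd = kd;
        this.targetLatencyMs = targetLatencyMs;
        this.permits = initialPermits;
    }

    /** Feed one latency observation; returns the adjusted admission limit. */
    public double update(double observedLatencyMs) {
        double error = targetLatencyMs - observedLatencyMs; // positive = headroom
        integral += error;
        double derivative = error - previousError;
        previousError = error;
        permits += kp * error + ki * integral + kd * derivative;
        return permits = Math.max(1.0, permits); // never throttle to zero
    }
}
```

The gains would need tuning against real traffic; the integral term is what lets the limit settle at a steady value rather than oscillating.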


Sent from my iPad

> On Dec 3, 2015, at 3:32 PM, Peter <jini@zeus.net.au> wrote:
> Care to share more of your insight?
> Peter.
> Sent from my Samsung device.
> ---- Original message ----
> From: Gregg Wonderly <gergg@cox.net>
> Sent: 03/12/2015 06:37:15 pm
> To: dev@river.apache.org
> Subject: Re: Trunk merge and thread pool
> The original use of thread pooling was more than likely about getting work done faster
by avoiding the overhead of thread creation, since in distributed systems, deferring work
can create deadlock by introducing indefinite wait scenarios if resource limits keep work
from being dispatched. 
> As a general rule of thumb, I have found that waiting until the point of thread creation
to introduce load control is never the right design.  Instead, load control must happen
at the head/beginning of any request into a distributed system. 
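A sketch of what load control at the head of the system could look like, a semaphore gate at the request entry point (AdmissionGate is a hypothetical name, not River code):

```java
import java.util.concurrent.Semaphore;

// Hypothetical sketch of admission control at the entry point: shed
// load up front rather than queueing work behind a thread limit deep
// inside the system.
public class AdmissionGate {
    private final Semaphore permits;

    public AdmissionGate(int maxConcurrent) {
        this.permits = new Semaphore(maxConcurrent);
    }

    /** Runs the request if capacity allows; returns false to signal rejection. */
    public boolean tryHandle(Runnable request) {
        if (!permits.tryAcquire()) {
            return false; // reject immediately at the front door
        }
        try {
            request.run();
            return true;
        } finally {
            permits.release();
        }
    }
}
```

Because tryAcquire never blocks, a caller that is refused finds out immediately and can retry elsewhere, instead of waiting indefinitely on a saturated pool.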
> Gregg 
> Sent from my iPhone 
>>  On Dec 3, 2015, at 3:26 AM, Peter <jini@zeusnet.au> wrote: 
>>  Just tried wrapping an Executors.newCachedThreadPool with a thread factory that
creates threads as per the original org.apache.river.thread.NewThreadAction. 
>>  Performance is much improved, the hotspot is gone. 
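A sketch of the wrapping described above (the thread naming, daemon flag and context-class-loader choice below are assumptions for illustration, not copied from org.apache.river.thread.NewThreadAction):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of a cached pool wrapped with a custom ThreadFactory.  The
// settings chosen here are illustrative assumptions.
public class PoolSketch {
    public static ExecutorService newWrappedPool() {
        final AtomicLong count = new AtomicLong();
        ThreadFactory factory = r -> {
            Thread t = new Thread(r, "river-pool-" + count.incrementAndGet());
            t.setDaemon(true); // don't let idle workers pin the JVM open
            t.setContextClassLoader(ClassLoader.getSystemClassLoader());
            return t;
        };
        // newCachedThreadPool reuses idle threads for 60s before letting
        // them die, which avoids paying thread creation on every task.
        return Executors.newCachedThreadPool(factory);
    }
}
```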
>>  There are regression tests with sun bug id's, which cause oome.  I thought this
>>  would prevent the executor from running, but to my surprise both tests pass.  These
tests failed when I didn't pool threads and just let them be gc'd.  These tests created over
11000 threads with waiting tasks.  In practice I wouldn't expect that to happen, as an IOException
should be thrown.  However, there are sun bug id's 6313626 and 6304782 for these regression
tests; if anyone has a record of these bugs or any information they can share, it would be
much appreciated. 
>>  It's worth noting that the jvm memory options should be tuned properly to avoid
oome in any case. 
>>  Lesson here is, creating threads and gc'ing them is much faster than thread pooling
if your thread pool is not well optimised. 
>>  It's worth noting that ObjectInputStream is now the hotspot for the test; the tested
code's hotspots are DatagramSocket and SocketInputStream. 
>>  Class loading is thread confined; there's a lot of class loading going on, but because
it is uncontended, it only consumes 0.2% cpu, about the same as our security architecture
overhead (non encrypted). 
>>  Regards, 
>>  Peter. 
>>  Sent from my Samsung device. 
>>  ---- Original message ---- 
>>  From: Bryan Thompson <bryan@systap.com> 
>>  Sent: 02/12/2015 11:25:03 pm 
>>  To: <dev@river.apache.org> <dev@river.apache.org> 
>>  Subject: Re: Trunk merge and thread pools 
>>  Ah. I did not realize that we were discussing a river-specific ThreadPool  
>>  vs the java.util.concurrent ThreadPoolExecutor.  I assume that it would  
>>  be difficult to just substitute in one of the standard executors?  
>>  Bryan  
>>>  On Wed, Dec 2, 2015 at 8:18 AM, Peter <jini@zeus.net.au> wrote:  
>>>   First it's worth considering we have a very suboptimal threadpool.  There 
>>>   are qa and jtreg tests that limit our ability to do much with ThreadPool. 
>>>   There are only two instances of ThreadPool, shared by various jeri  
>>>   endpoint implementations, and other components.  
>>>   The implementation is allowed to create numerous threads, only limited by 
>>>   available memory and oome.  At least two tests cause it to create over  
>>>   11000 threads.  
>>>   Also, it previously used a LinkedList queue, but now uses a  
>>>   BlockingQueue; however, the queue still uses poll, not take.  
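For illustration, the poll-vs-take distinction: a worker that polls with a timeout can retire when the queue stays empty, letting the pool shrink, while one that takes blocks indefinitely until a task arrives. A minimal sketch (hypothetical helper, not the ThreadPool code):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Sketch of a worker loop using timed poll rather than take: the idle
// timeout gives the worker a way to exit instead of blocking forever.
public class WorkerLoop {
    static int drain(BlockingQueue<Runnable> queue, long idleMs)
            throws InterruptedException {
        int executed = 0;
        while (true) {
            Runnable task = queue.poll(idleMs, TimeUnit.MILLISECONDS);
            if (task == null) {
                return executed; // idle timeout reached: worker retires
            }
            task.run();
            executed++;
        }
    }
}
```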
>>>   The limitation seems to be the concern by the original developers that  
>>>   there may be interdependencies between tasks.  Most tasks are method  
>>>   invocations from incoming and outgoing remote calls.  
>>>   It probably warrants further investigation to see if there's a suitable  
>>>   replacement.  
>>>   Regards,  
>>>   Peter.  
>>>   Sent from my Samsung device.  
>>>   ---- Original message ----  
>>>   From: Bryan Thompson <bryan@systap.com>  
>>>   Sent: 02/12/2015 09:46:13 am  
>>>   To: <dev@river.apache.org> <dev@river.apache.org>  
>>>   Subject: Re: Trunk merge and thread pools  
>>>   Peter,  
>>>   It might be worth taking this observation about the thread pool behavior to
>>>   the java concurrency list.  See what feedback you get.  I would certainly 
>>>   be interested in what people there have to say about this.  
>>>   Bryan
