directory-dev mailing list archives

From Jean-François Daune <>
Subject Re: [mina] threadpools and blocking operations in mina
Date Tue, 18 Oct 2005 17:54:50 GMT
Alex Burmester wrote:

>Hi all, I have a server built on mina that is currently a fully 
>asynchronous message router for an in-house protocol.  It's fast and it 
>works very well.  Unfortunately I need to add a bit of complexity to it, 
>and I want to make sure that I don't screw up the fast and working-well 
>part of it ;-)
>For a subset of my incoming messages I need to add a database lookup 
>which unfortunately involves a blocking JDBC call.  I'm planning to use an 
>Apache Commons connection pool.
>My server currently uses the SimpleServiceRegistry, so it gets the default 
>thread pools for I/O and protocol.  I was originally thinking of adding a 
>third worker thread pool and a queue to process the messages that need the 
>database lookup.  Then I thought I could just keep track of the number of 
>outstanding database lookups, keep that number within a limit, and reject 
>any messages beyond the limit with something like a queue-full message.  
>This should allow me to use the protocol thread pool to also process 
>database lookups without starving the protocol thread pool in the event 
>of database slowness.
>Just wondering if anyone has any thoughts one way or the other.
>(separate worker thread pool for database ops or keep it simple)
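The "keep track of outstanding lookups and reject beyond a limit" idea maps naturally onto a counting semaphore. A minimal sketch (not part of MINA; class and method names are my own, and the cap is something you would tune to your JDBC connection pool size):

```java
import java.util.concurrent.Semaphore;

public class DbLookupLimiter {
    // Hypothetical cap on in-flight JDBC lookups; tune to your connection pool size.
    private final Semaphore permits;

    public DbLookupLimiter(int maxOutstanding) {
        this.permits = new Semaphore(maxOutstanding);
    }

    /**
     * Runs the blocking JDBC work if we are under the limit.
     * Returns false immediately when the limit is reached, so the caller
     * can answer with a queue-full message instead of tying up a
     * protocol thread waiting on a slow database.
     */
    public boolean tryLookup(Runnable jdbcWork) {
        if (!permits.tryAcquire()) {
            return false;
        }
        try {
            jdbcWork.run();
            return true;
        } finally {
            permits.release();
        }
    }
}
```

Because `tryAcquire()` never blocks, protocol threads that hit the limit stay free to serve the non-database messages.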
Hi Alex,

I faced exactly the same situation, and I decided to run my persistence 
operations within the protocol thread pool.

I considered that the extra complexity brought by another thread pool 
was not worth it (I don't have long-running database operations).
I also increased the number of protocol threads compared to I/O threads.

But note that your solution is SEDA-compliant; mine is not.

Also, beware of thread CPU allocation. If the DB thread pool has far 
fewer threads than your protocol thread pool, you risk timeouts or 
your queue quickly reaching its maximum size.
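If you do go with a separate pool, keeping its queue bounded makes the "queue reaching maximum size" case explicit rather than a silent memory leak. A sketch using plain java.util.concurrent (nothing MINA-specific; the sizes and the factory name are illustrative):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class DbPoolFactory {
    /**
     * Builds a fixed-size pool for blocking JDBC work.
     * Match the thread count to your JDBC connection pool, and keep the
     * queue bounded so a slow database surfaces as rejections (which you
     * can answer with a queue-full message) instead of unbounded growth.
     */
    public static ThreadPoolExecutor newDbPool(int threads, int queueCapacity) {
        return new ThreadPoolExecutor(
                threads, threads,
                0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<Runnable>(queueCapacity),
                // Throws RejectedExecutionException when saturated, so the
                // submitting protocol thread finds out immediately.
                new ThreadPoolExecutor.AbortPolicy());
    }
}
```

Catching the `RejectedExecutionException` at the submission point gives you the same back-pressure signal as the counter approach, just enforced by the executor.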


