commons-dev mailing list archives

From "Rob Eamon (JIRA)" <>
Subject [jira] Commented: (POOL-86) GenericKeyedObjectPool retaining too many idle objects
Date Wed, 04 Apr 2007 18:34:32 GMT


Rob Eamon commented on POOL-86:

With respect to all the constraints on what can be done to the code, the resolution to this
issue feels like a hack to me.

From Sandy's earlier post on Oct 29:

1. The reason to use a pool is that you have reusable objects that are expensive to create
or dispose of. Using a FIFO ordering (LRU) means you use all the pooled objects over time
instead of just a handful. Idle objects will be kept fresh and ready to reuse, or they will
fail validateObject and be discarded. LIFO ordering (MRU) will tend to reuse the same objects
and others will rarely be used under peak load, unless you have an idle object evictor to
check their reusability ...
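
The FIFO-vs-LIFO distinction above comes down to which end of the idle list objects are returned to. A minimal sketch with a plain `LinkedList` (illustrative only, not the actual pool code) shows why FIFO cycles through all idle objects while LIFO keeps handing back the same one:

```java
import java.util.LinkedList;

// Illustrative sketch: the remove/add pairing on the idle list determines
// whether borrowing cycles through all idle objects (FIFO/LRU) or keeps
// reusing the same few (LIFO/MRU).
public class IdleOrderDemo {
    public static void main(String[] args) {
        // FIFO: borrow from the front, return to the back.
        LinkedList<String> fifo = new LinkedList<>();
        fifo.add("a"); fifo.add("b"); fifo.add("c");
        String borrowed = fifo.removeFirst();   // borrows "a"
        fifo.addLast(borrowed);                 // returned object goes to the tail
        System.out.println("FIFO next borrow: " + fifo.peekFirst()); // "b"

        // LIFO: borrow from the front, return to the front.
        LinkedList<String> lifo = new LinkedList<>();
        lifo.add("a"); lifo.add("b"); lifo.add("c");
        borrowed = lifo.removeFirst();          // borrows "a"
        lifo.addFirst(borrowed);                // returned object goes back to the head
        System.out.println("LIFO next borrow: " + lifo.peekFirst()); // "a" again
    }
}
```

Under FIFO, every idle object eventually gets borrowed and refreshed; under LIFO, the objects near the tail sit untouched and age out.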

2. Using an idle object evictor can cause deadlocks. If the PoolableObjectFactory uses synchronization
and the pool is accessed in a synchronized context, locks can be acquired in opposite
order, creating a possible deadlock; thread-safety cannot be guaranteed. Basically, avoid
idle object eviction if you can.

3. A FIFO pool is implicitly optimized to be prepared to handle the next load spike, and that
is the point of the pool. Optimizing the pool for low load levels reduces the key advantage
of using a pool: being prepared for lots of work.

1. We all agree on the purpose of the pool--to have idle objects ready to go when they are
needed. Often it is also necessary to limit the number of objects in the pool. Where there
is disagreement is on how the pool should behave.

If initial performance under peak load is paramount (i.e. you want enough idle objects on hand),
then the existing configuration parameters permit doing that. Setting the minIdle and
testWhileIdle values appropriately ensures having enough pooled objects to handle
periodic usage bursts.
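
As a configuration fragment (a hedged sketch against the Pool 1.x API, not runnable standalone -- the factory here is a placeholder), that might look like:

```java
import org.apache.commons.pool.BasePoolableObjectFactory;
import org.apache.commons.pool.impl.GenericObjectPool;

// Sketch: keep a floor of warm objects and let the evictor re-validate
// them while they sit idle.
GenericObjectPool pool = new GenericObjectPool(new BasePoolableObjectFactory() {
    public Object makeObject() { return new Object(); /* expensive resource in practice */ }
    public boolean validateObject(Object obj) { return true; /* real liveness check here */ }
});
pool.setMinIdle(10);                          // floor of ready objects for bursts
pool.setTestWhileIdle(true);                  // evictor validates objects while idle
pool.setTimeBetweenEvictionRunsMillis(30000); // evictor run interval; also maintains minIdle
```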

However, if one wants the pool to grow and shrink according to usage and wants the number
of idle objects to quiesce when usage subsides (e.g. no sense consuming a client license when
it isn't being used), then there is little that can be done if the pool is FIFO. As Mike points
out, one or two threads could keep all pool objects "fresh" even when they aren't needed any more.

2. Use of an evictor thread can definitely cause deadlocks--if the factory uses client locks.
If it doesn't, then it doesn't matter. And as Mike points out, if the programmer knows the behavior
of the pool, then they can presumably design around that to avoid deadlock.

Are you saying that the evictor thread should never have been implemented? Is it going to
be deprecated? What would you do as an alternative, if you had a clean slate?

3. Having the pool large enough to handle peak load is but one of many constraints that the
pool concept is designed to address. As mentioned above, if one needs a minimum pool size
at the ready for peak loads, this can be managed using configuration. But if one doesn't need
that and instead needs the pool to grow and shrink using the max/min sizes and idle timeouts,
then the current implementation inhibits shrinking of the pool.

Wrapping the pool classes with the eroding class seems like a stop-gap added to address something
that should be part of the core pool implementation. While throwing away the object being
returned vs throwing away an idle and expired object from the pool is a wash, it seems useful
for the pool implementation to have some sort of mechanism to purge expired objects from the
pool based simply on idle time. I don't have a strong preference about how that is implemented,
but the current implementation clearly needs a tweak.
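
The purge mechanism argued for above could be as simple as timestamping each entry on return and sweeping the head of the idle list. A hypothetical sketch (names are illustrative, not the Commons Pool API):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of an idle-time purge: each idle entry carries the timestamp of
// its return, and a sweep discards entries that have sat idle longer than
// a configured threshold.
public class IdlePurgeDemo {
    static class TimestampedObject {
        final Object obj;
        final long returnedAt;
        TimestampedObject(Object obj, long returnedAt) {
            this.obj = obj;
            this.returnedAt = returnedAt;
        }
    }

    // Oldest entries sit at the head (FIFO return order), so the sweep can stop early.
    static int purgeExpired(Deque<TimestampedObject> idle, long now, long minEvictableIdleMillis) {
        int purged = 0;
        while (!idle.isEmpty() && now - idle.peekFirst().returnedAt >= minEvictableIdleMillis) {
            idle.removeFirst(); // destroyObject(...) would be invoked here in a real pool
            purged++;
        }
        return purged;
    }

    public static void main(String[] args) {
        Deque<TimestampedObject> idle = new ArrayDeque<>();
        idle.addLast(new TimestampedObject("a", 0));     // idle for 10s at now=10000
        idle.addLast(new TimestampedObject("b", 6000));  // idle for 4s
        int purged = purgeExpired(idle, 10000, 5000);
        System.out.println("purged=" + purged + " remaining=" + idle.size());
    }
}
```

Because the list is ordered oldest-first, the sweep stops at the first entry that is still young enough, so its cost is proportional to the number of objects actually evicted.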

I respectfully request that this solution be reconsidered.

> GenericKeyedObjectPool retaining too many idle objects
> ------------------------------------------------------
>                 Key: POOL-86
>                 URL:
>             Project: Commons Pool
>          Issue Type: Bug
>    Affects Versions: 1.3
>            Reporter: Mike Martin
>         Assigned To: Sandy McArthur
>             Fix For: 2.0
>         Attachments: pool-86.patch, pool-86.withtest.patch
> There are two somewhat related problems in GenericKeyedObjectPool that cause
> many more idle objects to be retained than should be, for much longer than they
> should be.
> Firstly, borrowObject() is returning the LRU object rather than the MRU object.
> That minimizes rather than maximizes object reuse and tends to refresh all the
> idle objects, preventing them from becoming evictable.
> The idle LinkedList is being maintained with:
>     borrowObject:   list.removeFirst()
>     returnObject:   list.addLast()
> These should either both be ...First() or both ...Last() so the list maintains
> a newer-to-older, or vice-versa, ordering.  The code in evict() works from the
> end of the list which indicates newer-to-older might have been originally
> intended.
> Secondly, evict() itself has a couple of problems, both of which only show up
> when many keys are in play:
> 1.  Once it processes a key it doesn't advance to the next key.
> 2.  _evictLastIndex is not working as documented ("Position in the _pool where
>     the _evictor last stopped").  Instead it's the position where the last scan
>     started, and becomes the position at which it attempts to start scanning
>     *in the next pool*.  That just causes objects eligible for eviction to
>     sometimes be skipped entirely.
> Here's a patch fixing both problems:
> 990c990
> <             pool.addLast(new ObjectTimestampPair(obj));
> ---
> >             pool.addFirst(new ObjectTimestampPair(obj));
> 1094,1102c1094,1095
> <                 }
> <
> <                 // if we don't have a keyed object pool iterator
> <                 if (objIter == null) {
> <                     final LinkedList list = (LinkedList)_poolMap.get(key);
> <                     if (_evictLastIndex < 0 || _evictLastIndex > list.size())
> <                         _evictLastIndex = list.size();
> <                     }
> <                     objIter = list.listIterator(_evictLastIndex);
> ---
> >                     LinkedList list = (LinkedList)_poolMap.get(key);
> >                     objIter = list.listIterator(list.size());
> 1154,1155c1147
> <                     _evictLastIndex = -1;
> <                     objIter = null;
> ---
> >                     key = null;
> 1547,1551d1538
> <
> <     /**
> <      * Position in the _pool where the _evictor last stopped.
> <      */
> <     private int _evictLastIndex = -1;
> I have a local unit test for this but it depends on some other code I can't
> donate.  It works like this:
> 1.  Fill the pool with _maxTotal objects using many different keys.
> 2.  Select X as a small number, e.g. 2.
> 3.  Compute:
>         maxEvictionRunsNeeded = (maxTotal - X) / numTestsPerEvictionRun + 2;
>         maxEvictionTime = minEvictableIdleTimeMillis + maxEvictionRunsNeeded * timeBetweenEvictionRunsMillis;
> 4.  Enter a loop:
>     a.  Borrow X objects.
>     b.  Exit if _totalIdle = 0
>     c.  Return the X objects.
> Fail if loop doesn't exit within maxEvictionTime.
> Mike

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
