lucene-dev mailing list archives

From "Eks Dev (Commented) (JIRA)" <>
Subject [jira] [Commented] (LUCENE-3841) CloseableThreadLocal does not work well with Tomcat thread pooling
Date Sat, 03 Mar 2012 16:31:57 GMT


Eks Dev commented on LUCENE-3841:

This is indeed a problem. Recently we moved to Solr on Tomcat and hit it, in a slightly different form.

The nature of the problem is high thread churn on Tomcat; combined with expensive
analyzers it wreaks GC havoc (*even without stale CloseableThreadLocals from this issue*).
We are currently attacking this by reducing maxThreads and increasing minSpareThreads
(and by reducing the time before forced thread renewal). The goal is to increase the lifetime of threads
and to keep their number within reasonable limits. I would appreciate any tips in this direction.
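For reference, the knobs mentioned above live on the Connector (or a shared Executor) in Tomcat's server.xml. The values below are purely illustrative, not tuned recommendations:

```xml
<!-- server.xml: illustrative values only -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           maxThreads="32"
           minSpareThreads="16" />
```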

The problem with this strategy is that some cheap requests, not really related to search,
can saturate a smallish thread pool... I am looking for a way to define separate thread pools,
one for search/update requests and one for the rest, as it does not make sense to have 100 threads
searching Lucene on a dual-core box. Not really experienced with Tomcat...
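One Tomcat-level approach (a sketch, not something we run in production) is to give each traffic class its own Executor and Connector on separate ports, and have a fronting proxy route search/update URLs to one port and everything else to the other. Pool names, ports, and sizes below are made up:

```xml
<!-- Illustrative sketch: two executors, one connector each -->
<Executor name="searchPool" namePrefix="search-"
          maxThreads="8" minSpareThreads="4" maxIdleTime="60000" />
<Executor name="miscPool" namePrefix="misc-"
          maxThreads="50" minSpareThreads="5" />

<Connector port="8081" protocol="HTTP/1.1" executor="searchPool" />
<Connector port="8082" protocol="HTTP/1.1" executor="miscPool" />
```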

Of course, keeping Analyzer creation cheap helps (e.g. make the expensive, background structures
thread-safe so they can be shared, with only a thin analyzer using them). But this is not always possible.
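The sharing pattern above can be sketched in plain Java. All names here are hypothetical stand-ins (the shared set plays the role of an expensive dictionary or stemmer table), not Lucene's API:

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

// Sketch: build the expensive, immutable, thread-safe part once and share
// it; per-request "analyzer" objects stay thin and cheap to create, so
// there is no need to cache them per thread.
public class SharedResourceDemo {
    // Expensive immutable structure, built once for the whole process.
    static final Set<String> STOPWORDS;
    static {
        Set<String> s = new HashSet<>();
        s.add("the"); s.add("a"); s.add("of");
        STOPWORDS = Collections.unmodifiableSet(s);
    }

    // Thin per-request object: creation cost is a single allocation.
    static final class ThinAnalyzer {
        boolean isStopword(String token) {
            return STOPWORDS.contains(token.toLowerCase());
        }
    }

    public static void main(String[] args) {
        // Cheap enough to create one per request instead of per thread.
        ThinAnalyzer a = new ThinAnalyzer();
        System.out.println(a.isStopword("The"));
        System.out.println(a.isStopword("lucene"));
    }
}
```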

Just sharing experience here, maybe someone finds it helpful. Hints always welcome :)

> CloseableThreadLocal does not work well with Tomcat thread pooling
> ------------------------------------------------------------------
>                 Key: LUCENE-3841
>                 URL:
>             Project: Lucene - Java
>          Issue Type: Bug
>          Components: core/other
>    Affects Versions: 3.5
>         Environment: Lucene/Tika/Snowball running in a Tomcat web application
>            Reporter: Matthew Bellew
> We tracked down a large memory leak (effectively a leak anyway) caused
> by how Analyzer uses CloseableThreadLocal.
> CloseableThreadLocal.hardRefs holds references to Thread objects as
> keys.  The problem is that it only frees these references in the set()
> method, and SnowballAnalyzer will only call set() when it is used by a
> NEW thread.
> The problem scenario is as follows:
> The server experiences a spike in usage (say by robots or whatever)
> and many threads are created and referenced by
> CloseableThreadLocal.hardRefs.  The server quiesces and lets many of
> these threads expire normally.  Now we have a smaller, but adequate
> thread pool.  So CloseableThreadLocal.set() may not be called by
> SnowballAnalyzer (via Analyzer) for a _long_ time.  The purge code is
> never called, and these threads along with their thread local storage
> (lucene related or not) are never cleaned up.
> I think calling the purge code in both get() and set() would have
> avoided this problem, but is potentially expensive.  Perhaps using 
> WeakHashMap instead of HashMap may also have helped.  WeakHashMap 
> purges on get() and set().  So this might be an efficient way to
> clean up threads in get(), while set() might do the more expensive
> Map.keySet() iteration.
> Our current workaround is to not share SnowballAnalyzer instances
> among HTTP searcher threads.  We open and close one on every request.
> Thanks,
> Matt
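The WeakHashMap idea from the report above can be sketched as follows. This is an illustrative toy, not Lucene's implementation; class and field names are made up:

```java
import java.util.Collections;
import java.util.Map;
import java.util.WeakHashMap;

// Sketch: keying per-thread values by the Thread object in a WeakHashMap
// means a dead, unreferenced thread no longer pins its entry, so stale
// entries can be expunged on later get()/set() calls without an explicit
// keySet() sweep over hard references.
public class WeakThreadLocalSketch {
    // WeakHashMap itself is not thread-safe, so wrap it.
    private final Map<Thread, Object> perThread =
            Collections.synchronizedMap(new WeakHashMap<Thread, Object>());

    void set(Object value) { perThread.put(Thread.currentThread(), value); }
    Object get() { return perThread.get(Thread.currentThread()); }

    public static void main(String[] args) throws Exception {
        WeakThreadLocalSketch tl = new WeakThreadLocalSketch();
        Thread worker = new Thread(() -> tl.set("expensive-analyzer-state"));
        worker.start();
        worker.join();
        // The worker is dead but still strongly referenced by 'worker',
        // so its entry is still present and countable here.
        System.out.println("entries while thread object is referenced: "
                + tl.perThread.size());
        // Once 'worker' becomes unreachable, the weak key lets GC reclaim
        // the entry on a later expunge; with a plain HashMap keyed on hard
        // Thread references it would stay until some thread called a
        // purging set().
    }
}
```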

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see:


To unsubscribe, e-mail:
For additional commands, e-mail:
