lucene-dev mailing list archives

From Doug Cutting <>
Subject Re: Caching filter wrapper (was Re: RE : DateFilter.Before/After)
Date Mon, 15 Sep 2003 18:01:11 GMT
Bruce Ritchie wrote:
> The main reason I was using multiple readers was that I was hitting 
> a synchronization issue: hits.doc(i) was blocking across multiple 
> threads on a busy customer site, causing searches to become slower and 
> slower as more searches were attempted simultaneously. I believe the 
> root cause was that SegmentReader.document(i) was synchronized (I could 
> be wrong, it's been a while); however, I didn't have time to look into 
> the core code of Lucene when opening multiple readers was such a simple 
> solution and proved to solve the issue. Of course, now that I've got a 
> (bit) more time it might be worthwhile to investigate alternatives :)

If you do get a chance to look into this, I'd love to hear more.

FieldsReader.doc() could easily be re-written to be re-entrant.  For a 
start, it could synchronize separately on fieldStream and indexStream, 
which would let two threads use it at once.  (If an index is not 
optimized, the situation would be even better, since there would be a 
fieldStream and indexStream per index segment.)
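To illustrate the idea, here is a minimal sketch (not the actual Lucene 
source; the class and methods are stand-ins) of synchronizing on each 
stream object separately instead of on the whole reader, so a thread 
reading the index stream never blocks a thread reading the field stream:

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

// Hypothetical sketch of per-stream locking, in the spirit of
// FieldsReader: two independent lock objects instead of one
// synchronized doc() method.
class FieldsReaderSketch {
    private final DataInputStream indexStream;  // stand-in for the index stream
    private final DataInputStream fieldStream;  // stand-in for the fields stream

    FieldsReaderSketch(byte[] index, byte[] fields) {
        this.indexStream = new DataInputStream(new ByteArrayInputStream(index));
        this.fieldStream = new DataInputStream(new ByteArrayInputStream(fields));
    }

    int readIndexEntry() throws IOException {
        synchronized (indexStream) {   // lock only the index stream
            return indexStream.readInt();
        }
    }

    int readFieldEntry() throws IOException {
        synchronized (fieldStream) {   // lock only the field stream
            return fieldStream.readInt();
        }
    }
}
```

Because the two methods hold different monitors, two threads can read 
the two streams concurrently; only callers touching the *same* stream 
serialize.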

If that's not enough, then it could be re-written to use either a pool 
of cloned input streams, or just to clone a new stream for each call. 
(The primary expense of cloning a stream is allocating a 1k buffer.)
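A rough sketch of the clone-per-call variant (again hypothetical, not 
Lucene code): each call opens its own view over the shared, immutable 
data with a private buffer, so there is no shared file position and no 
lock at all. The cost is the per-call buffer allocation mentioned above.

```java
import java.io.BufferedInputStream;
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

// Hypothetical sketch: rather than sharing one positioned stream under
// a lock, each call constructs a fresh stream (with its own ~1k buffer)
// over the shared bytes, so concurrent callers never contend.
class CloningReader {
    private final byte[] data;  // shared, immutable stored-fields data

    CloningReader(byte[] data) { this.data = data; }

    int readIntAt(int offset) throws IOException {
        DataInputStream in = new DataInputStream(
            new BufferedInputStream(new ByteArrayInputStream(data), 1024));
        if (in.skip(offset) != offset) throw new IOException("seek failed");
        return in.readInt();
    }
}
```

A pooled variant would keep a small set of such streams and hand them 
out per call, trading the allocation cost for a little bookkeeping.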
