lucene-dev mailing list archives

From "Michael Busch (JIRA)" <>
Subject [jira] Updated: (LUCENE-1195) Performance improvement for TermInfosReader
Date Wed, 27 Feb 2008 19:31:51 GMT


Michael Busch updated LUCENE-1195:

    Attachment: lucene-1195.patch

Here is the simple patch. The cache is only used in TermInfosReader.get(Term).

So if, for example, a RangeQuery gets a TermEnum from the IndexReader, then
enumerating the terms using the TermEnum will not replace the terms in the
cache.
The LRUCache itself is not synchronized. It might happen that multiple
threads look up the same term at the same time; then we might get a cache
miss. But I think such a situation should be very rare, and it's therefore
better to avoid the synchronization overhead?
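The unsynchronized LRU behavior described above is essentially what java.util.LinkedHashMap provides out of the box. A minimal sketch (the class name and generics here are illustrative, not the patch's actual code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal unsynchronized LRU cache sketch. Passing accessOrder=true makes
// LinkedHashMap reorder entries on every get(), so the eldest entry is
// always the least recently used one.
public class SimpleLRUCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public SimpleLRUCache(int capacity) {
        super(capacity, 0.75f, true); // true = access order, not insertion order
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict the LRU entry once capacity is exceeded
    }
}
```

With a capacity of 1024 holding (Term, TermInfo) entries this matches the behavior described above; since no method is synchronized, two threads racing on the same term merely cost one redundant dictionary lookup.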

I set the default cache size to 1024. A cache entry is a (Term, TermInfo)
tuple. TermInfo needs 24 bytes, and a Term approx. 20-30 bytes, I think. So
the cache would need about 1024 * ~50 bytes = ~50 KB, plus a bit of overhead
from the LinkedHashMap. This is the memory requirement per index segment,
so a non-optimized index with 20 segments would need about 1 MB more memory
with this cache. I think this is acceptable? Otherwise we can also decrease
the cache size.
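The estimate above works out as follows (a back-of-envelope sketch using the approximate sizes quoted in the comment; it excludes the LinkedHashMap's per-node overhead):

```java
// Back-of-envelope memory estimate; sizes are the comment's approximations,
// not measured values.
public class CacheMemoryEstimate {
    static final int ENTRIES = 1024;            // default cache size
    static final int BYTES_PER_ENTRY = 24 + 26; // TermInfo ~24 bytes + Term ~20-30 bytes

    static int perSegmentBytes() {
        return ENTRIES * BYTES_PER_ENTRY;       // 51,200 bytes, roughly 50 KB
    }

    static int totalBytes(int segments) {
        return perSegmentBytes() * segments;    // 20 segments: roughly 1 MB
    }
}
```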

All core & contrib tests pass.

> Performance improvement for TermInfosReader
> -------------------------------------------
>                 Key: LUCENE-1195
>                 URL:
>             Project: Lucene - Java
>          Issue Type: Improvement
>          Components: Index
>            Reporter: Michael Busch
>            Assignee: Michael Busch
>            Priority: Minor
>             Fix For: 2.4
>         Attachments: lucene-1195.patch
> Currently we have a bottleneck for multi-term queries: the dictionary lookup is performed
> twice for each term. The first time in Similarity.idf(), where searcher.docFreq() is called.
> The second time when the posting list is opened (TermDocs or TermPositions).
> The dictionary lookup is not cheap; that's why a significant performance improvement is
> possible here if we avoid the second lookup. An easy way to do this is to add a small
> cache to TermInfosReader. 
> I ran some performance experiments with an LRU cache size of 20 and a mid-size index of
> 500,000 documents from Wikipedia. Here are some test results:
> 50,000 AND queries with 3 terms each:
> old:                  152 secs
> new (with LRU cache): 112 secs (26% faster)
> 50,000 OR queries with 3 terms each:
> old:                  175 secs
> new (with LRU cache): 133 secs (24% faster)
> For bigger indexes this patch will probably have less impact; for smaller ones, more.
> I will attach a patch soon.
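The double-lookup problem described in the issue can be sketched as a memoizing wrapper around the dictionary. The class and method names below are illustrative, not the patch's actual code; the point is that docFreq() and opening the posting list for the same term share a single expensive lookup:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the caching idea: memoize dictionary lookups per
// segment so the second lookup for a term is a cheap map hit.
public class CachedDictionary {
    static class TermInfo {
        final int docFreq;
        TermInfo(int docFreq) { this.docFreq = docFreq; }
    }

    private final Map<String, TermInfo> dictionary; // stands in for the on-disk term dictionary
    private final Map<String, TermInfo> cache = new HashMap<>(); // the patch uses an LRU map
    int dictionaryLookups = 0; // counts the "expensive" lookups

    public CachedDictionary(Map<String, TermInfo> dictionary) {
        this.dictionary = dictionary;
    }

    public TermInfo get(String term) {
        TermInfo ti = cache.get(term);
        if (ti == null) {
            dictionaryLookups++;           // expensive path: binary search + scan
            ti = dictionary.get(term);
            if (ti != null) cache.put(term, ti);
        }
        return ti;                         // repeat calls for the same term are cache hits
    }
}
```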

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
