lucene-java-user mailing list archives

From Arjen van der Meijden <acmmail...@tweakers.net>
Subject Improving search performance for forum search
Date Tue, 13 Nov 2012 07:36:14 GMT
Hi List,

I'm working on a search engine for our forum using Lucene 4. Since it's 
a brand new search engine, I can change it as I see fit.

We have about 1.5M topics in the various subforums and on average 20 
replies to each topic (i.e. about 33M in total).
For now, I've opted to index all replies to topics, group the best 
reply matches by their topic-id and keep only the top X (currently at 
most 5 per topic).

This works quite well, but the search time is fairly long. It takes 
about 330ms to produce a result for a single-word query that matches 
about 45k of the topics. The index is on an SSD in my test machine, and 
the 330ms is measured after repeated searches and includes several 
other processing steps as well.

Obviously, with an average of 20 replies per topic, that could actually 
be upwards of about 900k actual Documents being matched (I didn't look 
at the actual count, but it was probably less).

According to YourKit, about 50% of the time is spent in the Scorer and 
Collector. It mainly breaks down into two aspects: my custom scoring 
and the fact that my code is set up to retrieve all results and do 
further processing. But given the grouping on the topic-id, I doubt I 
can actually escape that last part...

To enable customized scoring of the documents, I need access to 
per-reply and per-topic meta-data. The per-topic meta-data is stored in 
in-memory objects accessible via a HashMap keyed on the topic's id, and 
the per-reply meta-data is simply a unix timestamp stored in a binary 
field.
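Simplified, the score adjustment amounts to something like this; the 
TopicMeta fields, the field names and the recency formula are just 
illustrative stand-ins for my actual logic:

import java.util.Map;

public class ReplyScoring {

  // Per-topic meta-data kept in memory, keyed on the topic id.
  public static class TopicMeta {
    public final float forumBoost;  // e.g. weight of the subforum
    public TopicMeta(float forumBoost) {
      this.forumBoost = forumBoost;
    }
  }

  private final Map<Long, TopicMeta> topicMeta;

  public ReplyScoring(Map<Long, TopicMeta> topicMeta) {
    this.topicMeta = topicMeta;
  }

  // Combines the raw Lucene score with per-topic meta-data and the reply's
  // unix timestamp (decoded from its binary stored field).
  public float score(float luceneScore, long topicId, long replyTimestamp) {
    TopicMeta meta = topicMeta.get(topicId);
    float boost = (meta != null) ? meta.forumBoost : 1.0f;

    // Favour recent replies: linear decay over roughly two years, with a floor.
    long ageSeconds = System.currentTimeMillis() / 1000L - replyTimestamp;
    float recency = Math.max(0.5f, 1.0f - ageSeconds / (2f * 365 * 24 * 3600));

    return luceneScore * boost * recency;
  }
}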

A fair amount of the time (about 20%, spent in Reader.document(doc, 
StoredFieldVisitor)) goes to retrieving the topicId, replyId and that 
timestamp from the Documents. The topicId and replyId are encoded into 
a single binary field.
I already use a specialized StoredFieldVisitor that only retrieves those 
two binary fields from each document.
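That visitor looks roughly like this; the field names ("ids", "ts") and 
the exact byte layout are simplified here compared to my real code:

import java.io.IOException;
import java.nio.ByteBuffer;

import org.apache.lucene.index.FieldInfo;
import org.apache.lucene.index.StoredFieldVisitor;

public class IdAndTimestampVisitor extends StoredFieldVisitor {

  private long topicId;
  private long replyId;
  private long timestamp;
  private int fieldsSeen;

  @Override
  public Status needsField(FieldInfo fieldInfo) throws IOException {
    if ("ids".equals(fieldInfo.name) || "ts".equals(fieldInfo.name)) {
      return Status.YES;
    }
    // Once both fields have been read, stop visiting the rest of the document.
    return fieldsSeen >= 2 ? Status.STOP : Status.NO;
  }

  @Override
  public void binaryField(FieldInfo fieldInfo, byte[] value) throws IOException {
    ByteBuffer buf = ByteBuffer.wrap(value);
    if ("ids".equals(fieldInfo.name)) {
      // topicId and replyId packed into one binary field (layout simplified here)
      topicId = buf.getLong();
      replyId = buf.getLong();
    } else {
      timestamp = buf.getLong();
    }
    fieldsSeen++;
  }

  public long getTopicId() { return topicId; }
  public long getReplyId() { return replyId; }
  public long getTimestamp() { return timestamp; }
}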

So now the questions:
- Can I reduce the overhead of retrieving the document's fields even 
further?
-- Should I use a different Codec (perhaps Pulsing or one of the "load 
the fielddata in memory"-codecs) to fetch those binary fields?
-- Should I change them to other field types?
-- Should I encode all binary data in a single field, rather than two 
fields (i.e. going from 9+8 bytes to 17)?
- Should I use a FieldCache so the required fields can be retrieved 
more quickly once they've been read (and how do you even use a 
FieldCache?)? My rough guess at the usage is sketched below this list.
- Is there a way to delay or skip part of the scoring, so I can skip 
retrieving Documents altogether? This would probably require predicting 
that the result is intended for a topic which already has 5 very good 
replies, so that seems a bit far-fetched (although it would yield the 
most gain).
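For the FieldCache question, the sketch below is my rough guess at what 
the usage might look like; as far as I can tell the timestamp would 
then have to be indexed as a LongField (or similar) instead of only 
being stored in a binary field, so I may be on the wrong track here.

import java.io.IOException;

import org.apache.lucene.index.AtomicReader;
import org.apache.lucene.search.FieldCache;

public class TimestampCache {

  // Loads (and caches, per segment reader) one long per document.
  // Assumes a field named "timestamp" that is indexed as a numeric long field.
  public static long[] loadTimestamps(AtomicReader reader) throws IOException {
    return FieldCache.DEFAULT.getLongs(reader, "timestamp", false);
  }
}

The idea would be to index the returned array by the per-segment doc id 
inside collect(), so the document() call is no longer needed for the 
timestamp.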

Any other tips?

Best regards,

Arjen van der Meijden
Tweakers.net B.V.

---------------------------------------------------------------------
To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
For additional commands, e-mail: java-user-help@lucene.apache.org

