lucene-java-user mailing list archives

From Robert Muir <rcm...@gmail.com>
Subject Re: TREC-3 Runs
Date Sat, 13 Mar 2010 12:38:47 GMT
On Fri, Mar 12, 2010 at 11:01 AM, Ivan Provalov <iprovalo@yahoo.com> wrote:
> Just to follow up on our previous discussion, here are a few runs in which we tested
> some of Lucene's different scoring mechanisms and other options.  We used Lucene's
> patches for LnbLtcSimilarity and BM25, and the contrib module for SweetSpotSimilarity.
>
> Lucene Default: 0.149
> Lucene BM25:    0.168
> SweetSpotSimilarity (Min: 10; Max: 1000; Steepness: 0.2): 0.173
> LnbLtcSimilarity (Pivot Norm + TF Default; Avg # of Terms: 450; slope: 0.25):   0.184
> LnbLtcSimilarity (Pivot Norm + TF Log; Avg # of Terms: 450; slope: 0.25):       0.186
> Lucene With Stemmer: 0.202
> Lucene With Lexical Affinities + Phrase Expansion + Stemmer: 0.21
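For readers unfamiliar with the SweetSpotSimilarity parameters quoted above (Min: 10; Max: 1000; Steepness: 0.2): they define a "plateau" of document lengths that receive no length-norm penalty, with documents outside it penalized more steeply as Steepness grows. A minimal sketch of that length-norm curve, based on the formula documented for contrib's SweetSpotSimilarity (this is an illustrative standalone computation, not the actual Lucene class):

```java
public class SweetSpotNormSketch {

    // Sweet-spot length norm: 1 / sqrt(steepness * (|t - min| + |t - max| - (max - min)) + 1).
    // For term counts t inside [min, max] the absolute values cancel and the norm is 1.0;
    // outside the plateau the norm falls off, penalizing very short or very long documents.
    static double lengthNorm(int numTerms, int min, int max, double steepness) {
        return 1.0 / Math.sqrt(steepness
                * (Math.abs(numTerms - min) + Math.abs(numTerms - max) - (max - min))
                + 1.0);
    }

    public static void main(String[] args) {
        // Parameters from the run above: Min 10, Max 1000, Steepness 0.2.
        System.out.println(lengthNorm(450, 10, 1000, 0.2));   // inside the plateau: 1.0
        System.out.println(lengthNorm(5000, 10, 1000, 0.2));  // long doc: well below 1.0
    }
}
```

The reported average of ~450 terms per document sits inside the [10, 1000] plateau, so typical documents score with no length penalty while outliers are damped.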

Ivan, thanks for reporting back. It's more evidence that it's worth our
trouble to support additional scoring models.

-- 
Robert Muir
rcmuir@gmail.com

---------------------------------------------------------------------
To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
For additional commands, e-mail: java-user-help@lucene.apache.org

