lucene-openrelevance-user mailing list archives

From Grant Ingersoll <gsing...@apache.org>
Subject Re: Calculating a search engine's MAP
Date Thu, 14 Jan 2010 18:19:47 GMT

On Jan 14, 2010, at 4:32 AM, Ludovico Boratto wrote:

> Hi everyone,
> sorry if I'm bothering you, but I really can't get past this problem.

Never a bother, that's what this list is for.

> I have my search engine developed, but I don't know how to test it.
> Let me briefly explain how it works...
> 
> My algorithm is based on implicit feedback. Feedback is collected each time a user finds a relevant resource during a search in a tagging system.
> The algorithm uses the feedback to dynamically strengthen associations between the resource indicated by the user and the keywords used in the search string. Keyword-resource associations are used by the algorithm to rank the results.
> 
> I have been looking for ages for a proper dataset that would work with my algorithm.
> I was thinking about using TREC's 2008 Relevance Feedback dataset:
> http://trec-relfeed.googlegroups.com/web/Guidelines08?gda=gZ0eUT4AAABtm9akyKg9pgh0qJJTHfy7X57I390rHU2uANbDSEOX3Kddn9WBc2Ae6sNICG8Kz2zjsKXVs-X7bdXZc5buSfmx
> As you can see from the document, for each query one or more relevance judgments are given (i.e. one or more relevant results).
> The thing is: how can I evaluate the quality of the ranking produced by my system?

If you are using a TREC dataset, it should come with queries and relevance judgments. You submit the queries to your system, get back the results, format them in the TREC run format that trec_eval expects, and then run trec_eval over the run file and the judgments.
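For concreteness, here is a minimal Java sketch of that formatting step, plus a per-query average precision helper so it's clear what the "map" number trec_eval reports actually is. The SearchResult type and the engine it stands in for are hypothetical placeholders; only the line layout (queryId, the literal "Q0", docId, rank, score, runTag) is the format trec_eval reads.

import java.io.PrintWriter;
import java.util.List;
import java.util.Set;

public class TrecRunWriter {

    // Hypothetical stand-in for one ranked result from the engine under test.
    public record SearchResult(String docId, double score) {}

    // Writes one query's ranked results in the run format trec_eval expects:
    //   <queryId> Q0 <docId> <rank> <score> <runTag>
    public static void writeRun(String queryId, List<SearchResult> results,
                                String runTag, PrintWriter out) {
        int rank = 1;
        for (SearchResult r : results) {
            // "Q0" is a literal placeholder field kept in the TREC format.
            out.printf("%s Q0 %s %d %f %s%n",
                       queryId, r.docId(), rank++, r.score(), runTag);
        }
    }

    // Average precision for one query: the mean of precision@k at each rank k
    // where a relevant document appears, divided by the total number of
    // relevant documents. MAP is the mean of this value over all queries.
    public static double averagePrecision(List<SearchResult> ranked,
                                          Set<String> relevant) {
        int hits = 0;
        double sum = 0.0;
        for (int k = 0; k < ranked.size(); k++) {
            if (relevant.contains(ranked.get(k).docId())) {
                hits++;
                sum += (double) hits / (k + 1);
            }
        }
        return relevant.isEmpty() ? 0.0 : sum / relevant.size();
    }
}

With the qrels file that ships with the dataset (lines of the form "queryId iter docId relevance"), running

  trec_eval qrels.txt run.txt

then reports MAP on the "map" line of its output, along with the other standard measures.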

Cheers,
Grant