lucene-openrelevance-user mailing list archives

From Omar Alonso <>
Subject Re: OpenRelevance and crowdsourcing
Date Wed, 21 Oct 2009 14:03:57 GMT
> While I realize $100 isn't a lot, we simply don't have a
> budget for such experiments and the point of ORP is to be
> able do this in the community.  I suppose we could ask
> the ASF board for the money, but I don't think we are ready
> for that anyway.  I very much have a "If you build it,
> they will come" mentality, so I know if we can just get
> bootstrapped with some data and some queries and a way to
> collect their judgments, we can get people interested.

I'm not defending MTurk, but it gives you a "world view" in terms of assessments versus a specific
community. You can run the test within the community, but you may also introduce bias into the
experiment. There is a SIGIR paper by Ellen Voorhees where she shows different agreement
levels between NIST and University of Waterloo assessors.
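To make the agreement point concrete, here is a minimal sketch (not from the paper; the data is invented for illustration) of Cohen's kappa, a standard statistic for comparing two assessors' relevance labels while correcting for chance agreement:

```python
# Sketch: inter-assessor agreement via Cohen's kappa.
# The label lists below are hypothetical, not real NIST/Waterloo data.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two assessors labeling the same documents."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of documents where both assessors agree.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement by chance, from each assessor's label frequencies.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    expected = sum(counts_a[k] * counts_b[k] for k in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two assessors judging 8 documents as relevant (1) or not (0):
a = [1, 1, 0, 0, 1, 0, 1, 0]
b = [1, 0, 0, 0, 1, 0, 1, 1]
print(cohens_kappa(a, b))  # → 0.5
```

A kappa near 1 means near-perfect agreement; values around 0.5, as here, are the kind of moderate agreement that shows up when assessor pools differ.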

You can still do a closed HIT (Human Intelligence Task) that pays $0 and is by invitation
only. You would probably need to pay Amazon something for hosting the experiment, but that
would reduce the cost dramatically. Of course, only the community would have access to it,
not all workers on MTurk.

If you want to build everything yourself, that is possible too. You can have a website that
collects judgments for a set of query/doc pairs.
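The server-side core of such a site is small; here is a rough sketch (class and method names are my own invention) of storing per-assessor judgments for query/doc pairs and resolving disagreements by majority vote:

```python
# Hypothetical sketch of the storage a judgment-collection site would need.
from collections import defaultdict

class JudgmentStore:
    def __init__(self):
        # (query_id, doc_id) -> list of (assessor, label) tuples
        self.judgments = defaultdict(list)

    def add(self, query_id, doc_id, assessor, label):
        """Record one assessor's judgment, e.g. 0 = not relevant, 1 = relevant."""
        self.judgments[(query_id, doc_id)].append((assessor, label))

    def majority_label(self, query_id, doc_id):
        """Resolve disagreements by majority vote over all recorded judgments."""
        labels = [lbl for _, lbl in self.judgments[(query_id, doc_id)]]
        return max(set(labels), key=labels.count) if labels else None

store = JudgmentStore()
store.add("q1", "doc42", "alice", 1)
store.add("q1", "doc42", "bob", 1)
store.add("q1", "doc42", "carol", 0)
print(store.majority_label("q1", "doc42"))  # → 1
```

Keeping every assessor's label, rather than just the final vote, also lets you compute agreement statistics over the community later.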

The INEX folks do the assessments on a volunteer basis but it takes quite a bit of time.

In any case, MTurk or not MTurk, I have some spare cycles in case people are interested in
trying ideas.



