opennlp-dev mailing list archives

From Jörn Kottmann (JIRA) <>
Subject [jira] Commented: (OPENNLP-59) Bad precision using FMeasure
Date Wed, 19 Jan 2011 11:28:44 GMT


Jörn Kottmann commented on OPENNLP-59:

William, is the work on this issue done? I reviewed your changes and everything looks really good.
Since you opened the issue, you should be the one to close it. Thanks for fixing this.

> Bad precision using FMeasure
> ----------------------------
>                 Key: OPENNLP-59
>                 URL:
>             Project: OpenNLP
>          Issue Type: Bug
>    Affects Versions: tools-1.5.1-incubating
>            Reporter: William Colen
>            Assignee: William Colen
>             Fix For: tools-1.5.1-incubating
> I noticed bad precision in FMeasure results. I think the issue is that the current implementation
> is summing divisions. It computes the precision and recall for every sample, and then adds
> the results for each sample to compute the overall result. By doing that, the errors introduced
> by each division are summed and can affect the final result.
> I found the problem while implementing the ChunkerEvaluator. To verify the evaluator,
> I tried to compare the results we get using OpenNLP with the Perl script conlleval.
> The results were always different if I processed more than one sentence, because the
> implementation was using FMeasure.updateScores(), which was summing divisions.
> To solve that and get the same results as conll, I basically stopped using the
> Mean class.
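The bug described above can be illustrated with a short sketch. This is not the actual OpenNLP FMeasure code; it is a hypothetical example contrasting the two aggregation strategies: averaging per-sample precisions (the buggy approach) versus summing the raw counts across all samples and dividing once (micro-averaging, the conlleval-style approach the fix adopts).

```java
// Hypothetical sketch, not the real OpenNLP implementation: shows why
// averaging per-sample precision ratios differs from micro-averaged precision.
public class PrecisionSketch {

    // Buggy approach: compute precision per sample, then average the ratios.
    // Division errors and unequal sample sizes distort the overall score.
    static double averagedPrecision(int[][] samples) {
        double sum = 0;
        for (int[] s : samples) {
            sum += (double) s[0] / s[1]; // truePositives / predictedSpans
        }
        return sum / samples.length;
    }

    // conlleval-style micro-averaging: accumulate counts first, divide once.
    static double microPrecision(int[][] samples) {
        long tp = 0, predicted = 0;
        for (int[] s : samples) {
            tp += s[0];        // correct spans in this sample
            predicted += s[1]; // spans predicted in this sample
        }
        return (double) tp / predicted;
    }

    public static void main(String[] args) {
        // Each sample: {truePositives, predictedSpans}.
        // Sample 1: 1 correct of 1 predicted; sample 2: 1 correct of 4 predicted.
        int[][] samples = { {1, 1}, {1, 4} };
        System.out.println(averagedPrecision(samples)); // 0.625
        System.out.println(microPrecision(samples));    // 0.4
    }
}
```

With more than one sample the two values diverge, which matches the report: OpenNLP's results only disagreed with conlleval once a second sentence was processed.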

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
