What can you tell me about your f-score annotations? I'm assuming you have
some, of course. Are you writing annotators to calculate f-scores from
medical texts?
Best Regards,
Kameron Arthur Cole
Technical Solution Architect
IBM Content and Predictive Analytics for Healthcare
IBM Global Business Services Center of Excellence
IBM US Federal
Miami Beach, FL, United States
E-mail: kameroncole@us.ibm.com
Work (cell): +1-305-389-8512
Fax: +1-845-491-4052
Twitter: @kameroncoleibm
My Blog: Enterprise Linguistics
Buy My Book
Yasen Kiprov <yasenkiprov@yahoo.com>
11/05/2012 09:11 AM
Please respond to user@uima.apache.org

To: "user@uima.apache.org" <user@uima.apache.org>
cc:
Subject: f-score evaluation tool?
Hello,
I'm trying to set up a test environment where I can compare collections of
annotated documents in terms of precision, recall, and f-scores. Is there
any easy-to-use tool for comparing analysed documents in the available UIMA
XML formats?
I'm familiar with the GATE corpus evaluation tools, so a CAS consumer that
outputs documents in the GATE XML format could also be a solution. Does
anyone know of such an open-source tool?
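To give an idea of what I mean, the comparison I have in mind is roughly
the exact-span matching sketched below (plain Java, purely illustrative;
the class name and toy data are my own, not from UIMA or GATE):

import java.util.HashSet;
import java.util.List;
import java.util.Set;

/**
 * Illustrative sketch only: compare a gold and a system set of annotation
 * spans by exact (type, begin, end) match and report precision, recall, F1.
 */
public class SpanScorer {

    /** A span is identified by its type name and character offsets. */
    record Span(String type, int begin, int end) {}

    public static void main(String[] args) {
        // Toy example: gold annotations vs. annotations from a pipeline.
        List<Span> gold = List.of(
                new Span("Diagnosis", 10, 22),
                new Span("Medication", 40, 49),
                new Span("Medication", 80, 95));
        List<Span> system = List.of(
                new Span("Diagnosis", 10, 22),
                new Span("Medication", 40, 49),
                new Span("Diagnosis", 60, 70));   // spurious annotation

        Set<Span> goldSet = new HashSet<>(gold);
        Set<Span> systemSet = new HashSet<>(system);

        // True positives: system spans that exactly match a gold span.
        long tp = systemSet.stream().filter(goldSet::contains).count();

        double precision = systemSet.isEmpty() ? 0.0 : (double) tp / systemSet.size();
        double recall    = goldSet.isEmpty()   ? 0.0 : (double) tp / goldSet.size();
        double f1 = (precision + recall == 0.0)
                ? 0.0
                : 2 * precision * recall / (precision + recall);

        System.out.printf("P=%.3f R=%.3f F1=%.3f%n", precision, recall, f1);
    }
}

Exact span matching is just the simplest case; ideally a tool would also
support partial or overlap-based matching, as I believe the GATE tools do.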
Thank you and all the best,
Yasen