Subject: Re: cTakes Annotation Comparison
From: Bruce Tietjen
To: dev@ctakes.apache.org
Date: Fri, 19 Dec 2014 11:15:35 -0700

Our analysis against the human adjudicated gold standard from the SHARE corpus uses a simple check to see whether the cTakes output includes the annotation specified by the gold standard. The initial results I reported were for exact matches of CUI and text span; only exact matches were counted.

If we also count as matches cTakes annotations with a matching CUI and a text span that overlaps the gold standard text span, then the matches increase to 224 matching annotations for the FastUMLS pipeline and 2319 for the old pipeline. Both tests are sketched below.

The question was also asked about annotations in the cTakes output that were not in the human adjudicated gold standard. The answer is yes: there were a lot of additional annotations made by cTakes that don't appear to be in the gold standard. We haven't analyzed that yet, but it looks like the gold standard we are using may only have Disease_Disorder annotations.
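In case it helps, the two tests amount to something like the following (a minimal Java sketch; the Ann type and method names are illustrative, not cTakes classes):

    // Minimal sketch of the two match tests used in our analysis.
    // The Ann triple (character offsets plus CUI) is illustrative, not a cTakes type.
    public class MatchTest {

        record Ann(int begin, int end, String cui) {}

        // Exact match: same CUI and identical text span.
        static boolean exact(Ann gold, Ann found) {
            return gold.cui().equals(found.cui())
                    && gold.begin() == found.begin()
                    && gold.end() == found.end();
        }

        // Relaxed match: same CUI and spans that intersect anywhere.
        static boolean overlap(Ann gold, Ann found) {
            return gold.cui().equals(found.cui())
                    && gold.begin() < found.end()
                    && found.begin() < gold.end();
        }
    }

The exact test requires an annotation to reproduce the gold span character for character; the relaxed test credits any intersecting span that carries the right CUI, which is why the counts rise from 215 to 224 and from 2245 to 2319.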
Bruce Tietjen
Senior Software Engineer, IMAT Solutions
Mobile: 801.634.1547
bruce.tietjen@imatsolutions.com

On Fri, Dec 19, 2014 at 9:54 AM, Miller, Timothy <Timothy.Miller@childrens.harvard.edu> wrote:

> Thanks Kim,
> This sounds interesting, though I don't totally understand it. Are you saying that extraction performance for a given note depends on which order the note was in the processing queue? If so, that's pretty bad! If you (or anyone else who understands this issue) has a concrete example, I think that might help me understand what the problem is/was.
>
> Even though, as Pei mentioned, we are going to try moving the community to the faster dictionary, I would like to understand better just to help myself avoid issues of this type going forward (and verify the new dictionary doesn't use similar logic).
>
> Also, when we finish annotating the sample notes, might we use that as a point of comparison for the two dictionaries? That would get around the issue that not everyone has access to the datasets we used for validation, and others are likely not able to share theirs either. And maybe we can replicate the notes if we want to simulate the scenario Kim is talking about with thousands or more notes.
>
> Tim
>
> On 12/19/2014 10:24 AM, Kim Ebert wrote:
> Guergana,
>
> I'm curious about the number of records in your gold standard sets, and whether your gold standard set was run through a long-running cTAKES process. I know at some point we fixed a bug in the old dictionary lookup that caused the permutations to become corrupted over time. Typically this isn't seen in the first few records, but over time, as patterns are used, the permutations would become corrupted. This caused documents that were fed through cTAKES more than once to have fewer codes returned than the first time.
>
> For example, if a permutation of 4,2,3,1 was found, the permutation would be corrupted to 1,2,3,4. It would no longer be possible to detect permutations of 4,2,3,1 until cTAKES was restarted. We got the fix in after the cTAKES 3.2.0 release: https://issues.apache.org/jira/browse/CTAKES-310 Depending upon the corpus size, I could see the permutation engine eventually having only a single permutation of 1,2,3,4.
>
> Typically, though, this isn't easily detected in the first 100 or so documents.
>
> We discovered this issue when we made cTAKES produce consistent output of codes in our system.
>
> Kim Ebert
> Software Engineer, IMAT Solutions
> Office: 801.669.7342
> kim.ebert@imatsolutions.com
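To make the failure mode Kim describes concrete, here is a hypothetical sketch of the bug class (not the actual cTAKES lookup code; CTAKES-310 has the real fix):

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.Collections;
    import java.util.List;

    public class PermutationBugSketch {

        // Permutation cache built once per process and shared across all
        // documents -- the precondition for the corruption.
        private static final List<List<Integer>> PERMUTATIONS = new ArrayList<>();
        static {
            PERMUTATIONS.add(new ArrayList<>(Arrays.asList(1, 2, 3, 4)));
            PERMUTATIONS.add(new ArrayList<>(Arrays.asList(4, 2, 3, 1)));
        }

        static boolean lookup(List<Integer> observedWordOrder) {
            for (List<Integer> p : PERMUTATIONS) {
                if (p.equals(observedWordOrder)) {
                    // BUG: sorting the shared list in place turns 4,2,3,1 into
                    // 1,2,3,4 for every later document; the fix sorts a copy.
                    Collections.sort(p);
                    return true;
                }
            }
            return false;
        }

        public static void main(String[] args) {
            List<Integer> outOfOrder = Arrays.asList(4, 2, 3, 1);
            System.out.println(lookup(outOfOrder)); // true:  first document matches
            System.out.println(lookup(outOfOrder)); // false: cache now holds 1,2,3,4 twice
        }
    }

The first document containing the out-of-order phrase matches; an identical document processed later does not, which is exactly the fewer-codes-on-reprocessing symptom described above.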
> On 12/19/2014 07:05 AM, Savova, Guergana wrote:
>
> We are doing a similar kind of evaluation and will report the results.
>
> Before we released the Fast lookup, we did a systematic evaluation across three gold standard sets. We did not see the trend that Bruce reported below. The P, R and F1 results from the old dictionary lookup and the fast one were similar.
>
> Thank you everyone!
> --Guergana
>
> -----Original Message-----
> From: David Kincaid [mailto:kincaid.dave@gmail.com]
> Sent: Friday, December 19, 2014 9:02 AM
> To: dev@ctakes.apache.org
> Subject: Re: cTakes Annotation Comparison
>
> Thanks for this, Bruce! Very interesting work. It confirms what I've seen in my small tests that I've done in a non-systematic way. Did you happen to capture the number of false positives yet (annotations made by cTAKES that are not in the human adjudicated standard)? I've seen a lot of dictionary hits that are not actually entity mentions, but I haven't had a chance to do a systematic analysis (we're working on our annotated gold standard now). One great example is the antibiotic "Today". Every time the word today appears in any text, it is annotated as a medication mention, when it almost never is being used in that sense.
>
> These results by themselves are quite disappointing to me. Both the UMLSProcessor and especially the FastUMLSProcessor seem to have pretty poor recall. It seems like the trade-off for more speed is a ten-fold (or more) decrease in entity recognition.
>
> Thanks again for sharing your results with us. I think they are very useful to the project.
>
> - Dave
>
> On Thu, Dec 18, 2014 at 5:06 PM, Bruce Tietjen <bruce.tietjen@perfectsearchcorp.com> wrote:
>
> Actually, we are working on a similar tool to compare it to the human adjudicated standard for the set we tested against. I didn't mention it before because the tool isn't complete yet, but initial results for the set (excluding those marked as "CUI-less") were as follows:
>
> Human adjudicated annotations: 4591 (excluding CUI-less)
>
> Annotations found matching the human adjudicated standard:
>     UMLSProcessor        2245
>     FastUMLSProcessor     215
>
> Bruce Tietjen
> Senior Software Engineer, IMAT Solutions <http://imatsolutions.com>
> Mobile: 801.634.1547
> bruce.tietjen@imatsolutions.com
>
> On Thu, Dec 18, 2014 at 3:37 PM, Chen, Pei wrote:
>
> Bruce,
> Thanks for this-- very useful.
> Perhaps Sean Finan can comment more, but it's also probably worth it to compare to an adjudicated human annotated gold standard.
>
> --Pei
>
> -----Original Message-----
> From: Bruce Tietjen [mailto:bruce.tietjen@perfectsearchcorp.com]
> Sent: Thursday, December 18, 2014 1:45 PM
> To: dev@ctakes.apache.org
> Subject: cTakes Annotation Comparison
>
> With the recent release of cTakes 3.2.1, we were very interested in checking for any differences in annotations between using the AggregatePlaintextUMLSProcessor pipeline and the AggregatePlaintextFastUMLSProcessor pipeline within this release of cTakes with its associated set of UMLS resources.
>
> We chose to use the SHARE 14-a-b Training data, which consists of 199 documents (Discharge 61, ECG 54, Echo 42 and Radiology 42), as the basis for the comparison.
>
> We decided to share a summary of the results with the development community.
>
> Documents processed: 199
>
> Processing time:
>     UMLSProcessor        2,439 seconds
>     FastUMLSProcessor    1,837 seconds
>
> Total annotations reported:
>     UMLSProcessor        20,365 annotations
>     FastUMLSProcessor     8,284 annotations
>
> Annotation comparisons:
>     Annotations common to both sets:                     3,940
>     Annotations reported only by the UMLSProcessor:     16,425
>     Annotations reported only by the FastUMLSProcessor:  4,344
>
> If anyone is interested, the following was our test procedure:
>
> We used the UIMA CPE to process the document set twice, once using the AggregatePlaintextUMLSProcessor pipeline and once using the AggregatePlaintextFastUMLSProcessor pipeline. We used the WriteCAStoFile CAS consumer to write the results to output files.
>
> We used a tool we recently developed to analyze and compare the annotations generated by the two pipelines.
> The tool compares the two outputs for each file and reports any differences in the annotations (MedicationMention, SignSymptomMention, ProcedureMention, AnatomicalSiteMention, and DiseaseDisorderMention) between the two output sets. The tool reports the number of 'matches' and 'misses' between each annotation set. A 'match' is defined as the presence of an identified source text interval with its associated CUI appearing in both annotation sets. A 'miss' is defined as the presence of an identified source text interval and its associated CUI in one annotation set, but no matching identified source text interval and CUI in the other. The tool also reports the total number of annotations (source text intervals with associated CUIs) reported in each annotation set. The compare tool is in our GitHub repository at https://github.com/perfectsearch/cTAKES-compare
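In outline, the match/miss counting described above reduces to set operations over (source text interval, CUI) triples. A minimal sketch follows, with illustrative types rather than the code actually in the repository:

    import java.util.HashSet;
    import java.util.Set;

    // Sketch of the comparison: a 'match' is a triple present in both
    // annotation sets; a 'miss' is a triple present in only one of them.
    public class CompareSketch {

        record Ann(int begin, int end, String cui) {}

        static void report(Set<Ann> umls, Set<Ann> fastUmls) {
            Set<Ann> common = new HashSet<>(umls);
            common.retainAll(fastUmls);            // matches: in both sets

            Set<Ann> onlyUmls = new HashSet<>(umls);
            onlyUmls.removeAll(fastUmls);          // misses for the fast pipeline

            Set<Ann> onlyFast = new HashSet<>(fastUmls);
            onlyFast.removeAll(umls);              // misses for the old pipeline

            System.out.printf("common=%d onlyUMLS=%d onlyFast=%d%n",
                    common.size(), onlyUmls.size(), onlyFast.size());
        }
    }

The reported figures are consistent with this accounting: 3,940 common annotations plus 16,425 UMLSProcessor-only annotations gives that pipeline's 20,365 total, and 3,940 plus 4,344 FastUMLSProcessor-only gives its 8,284 total.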