lucene-dev mailing list archives

From "Tommaso Teofili (Issue Comment Edited) (JIRA)" <j...@apache.org>
Subject [jira] [Issue Comment Edited] (LUCENE-3731) Create an analysis/uima module for UIMA based tokenizers/analyzers
Date Thu, 16 Feb 2012 15:40:59 GMT

    [ https://issues.apache.org/jira/browse/LUCENE-3731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13209439#comment-13209439 ]

Tommaso Teofili edited comment on LUCENE-3731 at 2/16/12 3:40 PM:
------------------------------------------------------------------

bq. Because Tokenizer.close() is misleading/confusing, the instance is still reused after this for subsequent documents.

Once close() has been called, it looks like the only correct way to reuse that Tokenizer instance is to call reset(someOtherInput) before doing anything else, so after adding

{code}
assert reader != null : "input has been closed, please reset it";
{code}

as the first line of the toString(Reader reader) method in BaseUIMATokenizer, I tried this test:
{code}

  @Test
  public void testSetReaderAndClose() throws Exception {
    StringReader input = new StringReader("the big brown fox jumped on the wood");
    Tokenizer t = new UIMAAnnotationsTokenizer("/uima/AggregateSentenceAE.xml", "org.apache.uima.TokenAnnotation", input);
    assertTokenStreamContents(t, new String[]{"the", "big", "brown", "fox", "jumped", "on", "the", "wood"});
    // after close() the underlying Reader is gone, so incrementToken() must trip the assert
    t.close();
    try {
      t.incrementToken();
      fail("should have failed because the reader has been closed");
    } catch (AssertionError error) {
      // expected
    }
    input = new StringReader("hi oh my");
    t = new UIMAAnnotationsTokenizer("/uima/TestAggregateSentenceAE.xml", "org.apache.lucene.uima.ts.TokenAnnotation", input);
    assertTrue("should have incremented", t.incrementToken());
    t.close();
    try {
      t.incrementToken();
      fail("should have failed because the reader has been closed");
    } catch (AssertionError error) {
      // expected
    }
    // reset(Reader) with fresh input makes the same instance usable again
    t.reset(new StringReader("hey what do you say"));
    assertTrue("should have incremented", t.incrementToken());
  }

{code}

and it looks to me like it's behaving correctly.
Still working on improving it and trying to catch possible corner cases.
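
To make it clearer where the check sits, here is a stripped-down sketch of the guarded method; everything after the assert is only a simplified illustration of draining the Reader into the String handed to the UIMA analysis engine, not necessarily the exact code in the patch:

{code}
// simplified sketch: the assert guards against a closed/unset Reader,
// the rest just reads the Reader into the String fed to the analysis engine
private String toString(Reader reader) throws IOException {
  assert reader != null : "input has been closed, please reset it";
  StringBuilder sb = new StringBuilder();
  char[] buffer = new char[1024];
  int read;
  while ((read = reader.read(buffer)) != -1) {
    sb.append(buffer, 0, read);
  }
  return sb.toString();
}
{code}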

                
> Create an analysis/uima module for UIMA based tokenizers/analyzers
> ------------------------------------------------------------------
>
>                 Key: LUCENE-3731
>                 URL: https://issues.apache.org/jira/browse/LUCENE-3731
>             Project: Lucene - Java
>          Issue Type: Improvement
>          Components: modules/analysis
>            Reporter: Tommaso Teofili
>            Assignee: Tommaso Teofili
>             Fix For: 3.6, 4.0
>
>         Attachments: LUCENE-3731.patch, LUCENE-3731_2.patch, LUCENE-3731_3.patch, LUCENE-3731_4.patch, LUCENE-3731_rsrel.patch, LUCENE-3731_speed.patch, LUCENE-3731_speed.patch, LUCENE-3731_speed.patch
>
>
> As discussed in SOLR-3013, the UIMA Tokenizers/Analyzers should be refactored out into a separate module (modules/analysis/uima), as they can be used in plain Lucene. Then solr/contrib/uima will contain only the related factories.
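
To make the split concrete, a factory left in solr/contrib/uima could be little more than a thin wrapper around the tokenizer from the new module. The sketch below is purely illustrative: the class name and the init parameter names are made up, and it assumes the Solr 3.x TokenizerFactory contract (init(Map) plus create(Reader)) together with the UIMAAnnotationsTokenizer constructor shown in the test above.

{code}
// Illustrative sketch only: class name and init parameter names are hypothetical.
public class UIMAAnnotationsTokenizerFactory extends BaseTokenizerFactory {

  private String descriptorPath;
  private String tokenType;

  @Override
  public void init(Map<String, String> args) {
    super.init(args);
    descriptorPath = args.get("descriptorPath"); // path to the UIMA analysis engine descriptor
    tokenType = args.get("tokenType");           // UIMA annotation type exposed as tokens
  }

  @Override
  public Tokenizer create(Reader input) {
    try {
      // delegate to the analysis/uima module; constructor signature as used in the test above
      return new UIMAAnnotationsTokenizer(descriptorPath, tokenType, input);
    } catch (Exception e) {
      throw new RuntimeException(e);
    }
  }
}
{code}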
