lucene-dev mailing list archives

From "Michael Busch (JIRA)" <j...@apache.org>
Subject [jira] Commented: (LUCENE-969) Optimize the core tokenizers/analyzers & deprecate Token.termText
Date Tue, 31 Jul 2007 19:56:53 GMT

    [ https://issues.apache.org/jira/browse/LUCENE-969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12516792 ]

Michael Busch commented on LUCENE-969:
--------------------------------------

> This could then slow down applications that still need to make a 
> new Token instance for every term in their documents because now 
> 2 class instances would be created for every token.

Yes, that's true. I was thinking that in the new optimized approach, where
people reuse the same Token and char[] instance, this wouldn't hurt,
since TermBuffer would basically just be a wrapper around a char
array. But you're right, this would add overhead in apps that
can't reuse Tokens.
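
To make the trade-off concrete, here is a rough sketch of the reusable-buffer idea (the class and method names below are illustrative stand-ins, not the actual Lucene API): a single char[] is resized and refilled in place for each term, so the only per-term allocation in the reuse path is avoided entirely.

```java
import java.util.Arrays;

// Hypothetical mini version of a token with a reusable term buffer.
// Tokenizers would grab the buffer and write term text directly into it,
// instead of allocating a new String (or Token) per term.
final class MiniToken {
    private char[] termBuffer = new char[16];
    private int termLength;

    // Grow-if-needed accessor, mirroring the "retrieve the buffer and
    // alter it directly" style: callers write chars straight into the
    // returned array, then record the length.
    char[] resizeBuffer(int minSize) {
        if (termBuffer.length < minSize) {
            termBuffer = Arrays.copyOf(termBuffer,
                    Math.max(minSize, termBuffer.length * 2));
        }
        return termBuffer;
    }

    void setTermLength(int length) { termLength = length; }
    int termLength() { return termLength; }

    // Convenience accessor; note this allocates a String, which is
    // exactly the per-term cost the buffer reuse avoids.
    String termText() { return new String(termBuffer, 0, termLength); }
}
```

The point of the discussion above is that wrapping this buffer in yet another object would add a second allocation for apps that must still create a fresh Token per term, which is why keeping the char[] directly on Token is preferable.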

> if we add our own new class here that's another
> Lucene-specific way to represent text content that people will have to
> learn.

Agreed. I was just thinking that the CharSequence approach would reduce the
number of setters and constructors, but you're right; we're going to remove
the ones that take Strings anyway in a future version.

OK, the API of this patch looks good to me! +1
Thanks for your detailed answer!


> Optimize the core tokenizers/analyzers & deprecate Token.termText
> -----------------------------------------------------------------
>
>                 Key: LUCENE-969
>                 URL: https://issues.apache.org/jira/browse/LUCENE-969
>             Project: Lucene - Java
>          Issue Type: Improvement
>          Components: Analysis
>    Affects Versions: 2.3
>            Reporter: Michael McCandless
>            Assignee: Michael McCandless
>            Priority: Minor
>             Fix For: 2.3
>
>         Attachments: LUCENE-969.patch
>
>
> There is some "low hanging fruit" for optimizing the core tokenizers
> and analyzers:
>   - Re-use a single Token instance during indexing instead of creating
>     a new one for every term.  To do this, I added a new method "Token
>     next(Token result)" (Doron's suggestion) which means TokenStream
>     may use the "Token result" as the returned Token, but is not
>     required to (ie, can still return an entirely different Token if
>     that is more convenient).  I added default implementations for
>     both next() methods in TokenStream.java so that a TokenStream can
>     choose to implement only one of the next() methods.
>   - Use "char[] termBuffer" in Token instead of the "String
>     termText".
>     Token now maintains a char[] termBuffer for holding the term's
>     text.  Tokenizers & filters should retrieve this buffer and
>     directly alter it to put the term text in or change the term
>     text.
>     I only deprecated the termText() method.  I still allow the ctors
>     that pass in String termText, as well as setTermText(String), but
>     added a NOTE about performance cost of using these methods.  I
>     think it's OK to keep these as convenience methods?
>     After the next release, when we can remove the deprecated API, we
>     should clean up Token.java to no longer maintain "either String or
>     char[]" (and the initTermBuffer() private method) and always use
>     the char[] termBuffer instead.
>   - Re-use TokenStream instances across Fields & Documents instead of
>     creating a new one for each doc.  To do this I added an optional
>     "reusableTokenStream(...)" to Analyzer which just defaults to
>     calling tokenStream(...), and then I implemented this for the core
>     analyzers.
> I'm using the patch from LUCENE-967 for benchmarking just
> tokenization.
> The changes above give 21% speedup (742 seconds -> 585 seconds) for
> LowerCaseTokenizer -> StopFilter -> PorterStemFilter chain, tokenizing
> all of Wikipedia, on JDK 1.6 -server -Xmx1024M, Debian Linux, RAID 5
> IO system (best of 2 runs).
> If I pre-break Wikipedia docs into 100 token docs then it's 37% faster
> (1236 sec -> 774 sec), I think because of re-using TokenStreams across
> docs.
> I'm just running with this alg and recording the elapsed time:
>   analyzer=org.apache.lucene.analysis.LowercaseStopPorterAnalyzer
>   doc.tokenize.log.step=50000
>   docs.file=/lucene/wikifull.txt
>   doc.maker=org.apache.lucene.benchmark.byTask.feeds.LineDocMaker
>   doc.tokenized=true
>   doc.maker.forever=false
>   {ReadTokens > : *
> See this thread for discussion leading up to this:
>   http://www.gossamer-threads.com/lists/lucene/java-dev/51283
> I also fixed Token.toString() to work correctly when termBuffer is
> used (and added unit test).
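
The "Token next(Token result)" pattern described above can be sketched as follows. This is a simplified stand-in, not the real Lucene tokenizer code (SimpleToken and WhitespaceSplitter are made-up names): the stream fills the caller-supplied token's buffer in place and returns it, while remaining free to allocate and return a different instance when reuse isn't convenient.

```java
// Minimal token with an in-place char[] buffer (illustrative only).
final class SimpleToken {
    char[] buffer = new char[16];
    int length;

    // Convenience setter for the sketch: copies chars into the reusable
    // buffer, growing it only when the term is longer than the buffer.
    void setTerm(String s) {
        if (buffer.length < s.length()) buffer = new char[s.length()];
        s.getChars(0, s.length(), buffer, 0);
        length = s.length();
    }

    String term() { return new String(buffer, 0, length); }
}

// Toy stream demonstrating the reuse-friendly next(Token result) contract.
final class WhitespaceSplitter {
    private final String[] words;
    private int pos;

    WhitespaceSplitter(String text) { words = text.split("\\s+"); }

    // Returns null at end of stream. If the caller passes a token, it is
    // refilled and returned; a stream that can't reuse it may still
    // return a different instance, so callers must use the return value.
    SimpleToken next(SimpleToken result) {
        if (pos >= words.length) return null;
        if (result == null) result = new SimpleToken();
        result.setTerm(words[pos++]);
        return result;
    }
}
```

A consuming loop then allocates one token up front and passes it back in on every call, e.g. `for (SimpleToken t = ts.next(reusable); t != null; t = ts.next(reusable)) { ... }`, which is where the per-term allocation savings in the benchmark come from.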

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



