lucene-dev mailing list archives

From "Robert Muir (JIRA)" <>
Subject [jira] Commented: (LUCENE-2090) convert automaton to char[] based processing and TermRef / TermsEnum api
Date Tue, 24 Nov 2009 18:36:39 GMT


Robert Muir commented on LUCENE-2090:

I guess now you have me starting to think about a byte[] contains().
Because really the worst case, which I bet a lot of users hit, is not something like *foobar,
but instead *foobar* !
In UTF-8 you can do such things safely; I would have to extract the "longest common constant
sequence" out of the DFA.
This might be more generally applicable.

commonSuffix is easy... at least it makes progress for now, even if it lands slightly later in trunk.

This could be a later improvement.
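A minimal sketch of the byte[] contains() idea, assuming a standalone helper (not Lucene's actual API): because UTF-8 lead bytes and continuation bytes occupy disjoint ranges, a byte-level match of a well-formed UTF-8 needle can never begin or end in the middle of a multi-byte character, so an exact substring search over the encoded bytes is equivalent to a char-level contains().

```java
import java.nio.charset.StandardCharsets;

public class Utf8Contains {
    // Naive byte-level substring search. UTF-8's self-synchronizing
    // property (continuation bytes 10xxxxxx never look like lead
    // bytes) means a match over raw bytes cannot straddle a
    // character boundary, so no decoding is needed.
    static boolean contains(byte[] term, byte[] needle) {
        if (needle.length == 0) return true;
        for (int i = 0; i + needle.length <= term.length; i++) {
            int j = 0;
            while (j < needle.length && term[i + j] == needle[j]) j++;
            if (j == needle.length) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        // A term that a *foobar* pattern should accept, checked
        // without ever decoding the bytes back to chars.
        byte[] term = "xxfoobarxx".getBytes(StandardCharsets.UTF_8);
        byte[] needle = "foobar".getBytes(StandardCharsets.UTF_8);
        System.out.println(contains(term, needle)); // prints true
    }
}
```

This is the pre-filter shape: a cheap contains() on the constant sequence rejects most terms before the full DFA ever runs.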

> convert automaton to char[] based processing and TermRef / TermsEnum api
> ------------------------------------------------------------------------
>                 Key: LUCENE-2090
>                 URL:
>             Project: Lucene - Java
>          Issue Type: Improvement
>          Components: Search
>            Reporter: Robert Muir
>            Priority: Minor
>             Fix For: 3.1
> The automaton processing is currently done with String, mostly because TermEnum is based on String.
> It is easy to change the processing to work with char[], since behind the scenes this is used anyway.
> In general I think we should make sure char[] based processing is exposed in the automaton pkg anyway, for things like pattern-based tokenizers and such.
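The commonSuffix shortcut mentioned above can be sketched the same way over char[] (a hypothetical standalone helper, not the automaton package itself): for a pattern like *foobar, every accepted term ends with the constant suffix, so a cheap endsWith check filters terms before the automaton is consulted.

```java
public class CommonSuffixFilter {
    // Check whether the first len chars of term end with suffix.
    // For a leading-wildcard pattern *foobar this rejects most
    // non-matching terms with a straight char comparison.
    static boolean endsWith(char[] term, int len, char[] suffix) {
        if (suffix.length > len) return false;
        int off = len - suffix.length;
        for (int i = 0; i < suffix.length; i++) {
            if (term[off + i] != suffix[i]) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        char[] term = "myfoobar".toCharArray();
        System.out.println(endsWith(term, term.length, "foobar".toCharArray()));
        System.out.println(endsWith(term, term.length, "barfoo".toCharArray()));
    }
}
```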

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.

