lucene-dev mailing list archives

From "Robert Muir (JIRA)" <j...@apache.org>
Subject [jira] Issue Comment Edited: (LUCENE-1606) Automaton Query/Filter (scalable regex)
Date Thu, 22 Apr 2010 19:35:50 GMT

    [ https://issues.apache.org/jira/browse/LUCENE-1606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12785034#action_12785034 ]

Robert Muir edited comment on LUCENE-1606 at 4/22/10 3:35 PM:
--------------------------------------------------------------

edit: edited out my Chinese chars and replaced them with <chineseCharOutsideBMP>, as there were
some problems indexing this comment.

btw, the Unicode complexity I mention here is not just me being pedantic; it's an impedance mismatch
between the automaton library, which uses a UTF-16 code unit representation, and the new flex API,
which requires valid UTF-8.

I am not trying to introduce complexity for no reason; here is an example:

{code}
RegExp re = new RegExp("(<chineseCharOutsideBMP>|<chineseCharOutsideBMP>)");
System.out.println(re.toAutomaton());
{code}
{noformat}
initial state: 0
state 0 [reject]:
  \ud866 -> 2
state 1 [accept]:
state 2 [reject]:
  \udf05-\udf06 -> 1
{noformat}

As you can see, the automaton library handles these characters correctly, but as UTF-16 code units.
So it is important not to seek to invalid locations (unpaired surrogates) when walking through the
DFA: those would be replaced by U+FFFD when converted to UTF-8, and terms could be skipped, or we
could go backwards, creating a loop.
That's why I spent so much time on this.
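
To make the mismatch concrete, here is a minimal, self-contained sketch (plain JDK only, not code
from the patch) showing that same supplementary character as two UTF-16 code units and as four
UTF-8 bytes:

{code}
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class SurrogateDemo {
  public static void main(String[] args) {
    // U+29B05, a CJK character outside the BMP, as the automaton sees it:
    // two UTF-16 code units forming the surrogate pair 0xD866 0xDF05.
    String s = new String(Character.toChars(0x29B05));
    System.out.println(s.length());                               // 2
    System.out.printf("\\u%04X \\u%04X%n", (int) s.charAt(0), (int) s.charAt(1));

    // The same character as the flex API sees it: four UTF-8 bytes.
    byte[] utf8 = s.getBytes(StandardCharsets.UTF_8);
    System.out.println(utf8.length);                              // 4
    System.out.println(Arrays.toString(utf8));                    // [-16, -87, -84, -123]

    // Seeking "between" the two code units leaves an unpaired surrogate,
    // which is not valid UTF-8 and gets replaced on conversion.
    System.out.println(Character.isHighSurrogate(s.charAt(0)));   // true
  }
}
{code}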



> Automaton Query/Filter (scalable regex)
> ---------------------------------------
>
>                 Key: LUCENE-1606
>                 URL: https://issues.apache.org/jira/browse/LUCENE-1606
>             Project: Lucene - Java
>          Issue Type: New Feature
>          Components: Search
>            Reporter: Robert Muir
>            Assignee: Robert Muir
>            Priority: Minor
>             Fix For: Flex Branch, 3.1
>
>         Attachments: automaton.patch, automatonMultiQuery.patch, automatonmultiqueryfuzzy.patch,
>         automatonMultiQuerySmart.patch, automatonWithWildCard.patch, automatonWithWildCard2.patch,
>         BenchWildcard.java, LUCENE-1606-flex.patch, LUCENE-1606-flex.patch, LUCENE-1606-flex.patch,
>         LUCENE-1606-flex.patch, LUCENE-1606-flex.patch, LUCENE-1606-flex.patch, LUCENE-1606-flex.patch,
>         LUCENE-1606-flex.patch, LUCENE-1606-flex.patch, LUCENE-1606-flex.patch, LUCENE-1606-flex.patch,
>         LUCENE-1606-flex.patch, LUCENE-1606.patch, LUCENE-1606.patch, LUCENE-1606.patch, LUCENE-1606.patch,
>         LUCENE-1606.patch, LUCENE-1606.patch, LUCENE-1606.patch, LUCENE-1606.patch, LUCENE-1606.patch,
>         LUCENE-1606.patch, LUCENE-1606.patch, LUCENE-1606.patch, LUCENE-1606.patch, LUCENE-1606.patch,
>         LUCENE-1606.patch, LUCENE-1606_nodep.patch
>
>
> Attached is a patch for an AutomatonQuery/Filter (the name can change if it's not suitable).
> While the out-of-the-box contrib RegexQuery is nice, I have some very large indexes (100M+
> unique tokens) where queries are quite slow (2 minutes, etc.). Additionally, all of the existing
> RegexQuery implementations in Lucene are really slow if there is no constant prefix. This
> implementation does not depend on a constant prefix, and runs the same query in 640ms.
> Some use cases I envision:
>  1. lexicography/etc. on large text corpora
>  2. looking for things such as URLs where the prefix is not constant (http:// or ftp://)
> The Filter uses the BRICS package (http://www.brics.dk/automaton/) to convert regular
> expressions into a DFA. Then, the filter "enumerates" terms in a special way, using the
> underlying state machine. Here is my short description from the comments:
>      The algorithm here is pretty basic. Enumerate terms, but instead of a binary accept/reject, do:
>
>      1. Look at the portion that is OK (did not enter a reject state in the DFA).
>      2. Generate the next possible String and seek to that (see the sketch below).
> The Query simply wraps the Filter with ConstantScoreQuery.
> I did not include the automaton.jar inside the patch, but it can be downloaded from
> http://www.brics.dk/automaton/ and is BSD-licensed.
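
For illustration, here is a minimal sketch of the enumeration idea described above. The brics
RegExp/RunAutomaton classes are the real library API; the example regex, term list, and the
printed bookkeeping are simplified placeholders, and the actual next-string/seek computation from
the patch is only described in the comments, not implemented:

{code}
import dk.brics.automaton.Automaton;
import dk.brics.automaton.RegExp;
import dk.brics.automaton.RunAutomaton;

public class EnumerationSketch {
  public static void main(String[] args) {
    // Build a DFA for terms starting with http:// or ftp:// (no constant prefix).
    Automaton a = new RegExp("(http|ftp)://.*").toAutomaton();
    RunAutomaton run = new RunAutomaton(a);    // tableized deterministic form

    for (String term : new String[] {"ftp://a", "gopher://a", "http://b"}) {
      int state = run.getInitialState();
      int ok = 0;                              // length of the prefix that stayed "alive" in the DFA
      for (int i = 0; i < term.length(); i++) {
        int next = run.step(state, term.charAt(i));
        if (next == -1) break;                 // dead end: no transition for this character
        state = next;
        ok = i + 1;
      }
      boolean match = ok == term.length() && run.isAccept(state);
      System.out.println(term + " match=" + match + " okPrefix=" + term.substring(0, ok));
      // The real filter uses the surviving prefix to compute the next string the DFA could
      // possibly accept, and seeks the term enumeration directly to it instead of testing
      // every term in the index.
    }
  }
}
{code}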

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


