lucene-dev mailing list archives

From "Eks Dev (JIRA)" <>
Subject [jira] Commented: (LUCENE-1606) Automaton Query/Filter (scalable regex)
Date Tue, 21 Apr 2009 21:44:47 GMT


Eks Dev commented on LUCENE-1606:

In order for a Levenshtein automaton to work, you need to have the complete dictionary as a DFA. Once
you have the dictionary as a DFA (or any sort of trie), computing simple regexes, or a simple fixed
or weighted Levenshtein distance, becomes a snap. The Levenshtein-automata approach is particularly fast
at it; K. Oflazer's method is much simpler and only slightly slower (one page of code).

As said, you cannot really walk the current term dictionary as an automaton/trie (or do you have an
idea how to do that?). I guess there are enough applications where storing the complete term dictionary
as an in-RAM DFA is not a problem. Even building some smart (heavily cached) persistent trie/DFA
should not be all that complex.
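
For illustration, a minimal sketch of the "dictionary as trie" idea (not from the issue; all class and method names here are made up): carry one row of the Levenshtein matrix down each trie edge, and prune a branch as soon as the row's minimum already exceeds the allowed distance.

```java
import java.util.*;

// Hypothetical sketch: fuzzy lookup against an in-memory trie. Each trie edge
// extends one row of the Levenshtein matrix; a subtree is abandoned once the
// row's minimum exceeds the allowed distance, since deeper rows cannot shrink.
class TrieFuzzy {
    static class Node {
        Map<Character, Node> children = new TreeMap<>();
        String word; // non-null if a dictionary term ends at this node
    }

    final Node root = new Node();

    void add(String word) {
        Node n = root;
        for (char c : word.toCharArray())
            n = n.children.computeIfAbsent(c, k -> new Node());
        n.word = word;
    }

    List<String> search(String query, int maxDist) {
        List<String> hits = new ArrayList<>();
        int[] row = new int[query.length() + 1];
        for (int i = 0; i < row.length; i++) row[i] = i; // distance from empty prefix
        for (Map.Entry<Character, Node> e : root.children.entrySet())
            walk(e.getValue(), e.getKey(), query, row, maxDist, hits);
        return hits;
    }

    private void walk(Node n, char c, String q, int[] prev, int maxDist, List<String> hits) {
        int[] row = new int[prev.length];
        row[0] = prev[0] + 1;
        int min = row[0];
        for (int i = 1; i < row.length; i++) {
            int cost = (q.charAt(i - 1) == c) ? 0 : 1;
            row[i] = Math.min(Math.min(row[i - 1] + 1, prev[i] + 1), prev[i - 1] + cost);
            min = Math.min(min, row[i]);
        }
        if (n.word != null && row[row.length - 1] <= maxDist) hits.add(n.word);
        if (min <= maxDist) // otherwise every extension is already too far away
            for (Map.Entry<Character, Node> e : n.children.entrySet())
                walk(e.getValue(), e.getKey(), q, row, maxDist, hits);
    }

    public static void main(String[] args) {
        TrieFuzzy t = new TrieFuzzy();
        for (String w : new String[] {"lucene", "lucent", "search", "query"}) t.add(w);
        System.out.println(t.search("lucenr", 1)); // prints [lucene, lucent]
    }
}
```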

Or did you intend just to iterate over all terms and compute the distance faster, i.e. break off the
LD matrix computation as soon as you see you have hit the boundary? But that still requires iterating over all terms.
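
A minimal sketch of that "break the LD matrix computation at the boundary" idea as a plain per-term check (class and method names are hypothetical): compute the matrix row by row, and bail out once every cell in the current row exceeds the cutoff, since no later row can get smaller.

```java
// Hypothetical sketch: bounded Levenshtein distance with early termination.
class BoundedLevenshtein {
    /** Returns the edit distance if it is <= k, otherwise -1 (early exit). */
    static int within(String a, String b, int k) {
        if (Math.abs(a.length() - b.length()) > k) return -1; // cheap length filter
        int[] prev = new int[b.length() + 1];
        for (int j = 0; j <= b.length(); j++) prev[j] = j;
        for (int i = 1; i <= a.length(); i++) {
            int[] row = new int[b.length() + 1];
            row[0] = i;
            int min = row[0];
            for (int j = 1; j <= b.length(); j++) {
                int cost = (a.charAt(i - 1) == b.charAt(j - 1)) ? 0 : 1;
                row[j] = Math.min(Math.min(row[j - 1] + 1, prev[j] + 1), prev[j - 1] + cost);
                min = Math.min(min, row[j]);
            }
            if (min > k) return -1; // boundary hit: abandon this term early
            prev = row;
        }
        return prev[b.length()] <= k ? prev[b.length()] : -1;
    }

    public static void main(String[] args) {
        System.out.println(within("automaton", "automata", 2)); // prints 2
        System.out.println(within("automaton", "filter", 2));   // prints -1
    }
}
```

As the comment notes, this speeds up each individual comparison but still has to visit every term in the dictionary.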

I have done something similar, in memory, but unfortunately someone else paid me for it
and is not willing to share...

> Automaton Query/Filter (scalable regex)
> ---------------------------------------
>                 Key: LUCENE-1606
>                 URL:
>             Project: Lucene - Java
>          Issue Type: New Feature
>          Components: contrib/*
>            Reporter: Robert Muir
>            Priority: Minor
>             Fix For: 2.9
>         Attachments: automaton.patch, automatonMultiQuery.patch, automatonmultiqueryfuzzy.patch,
automatonMultiQuerySmart.patch, automatonWithWildCard.patch, automatonWithWildCard2.patch
> Attached is a patch for an AutomatonQuery/Filter (name can change if it's not suitable).
> Whereas the out-of-box contrib RegexQuery is nice, I have some very large indexes (100M+
unique tokens) where such queries are quite slow, on the order of 2 minutes. Additionally, all of the
existing RegexQuery implementations in Lucene are really slow if there is no constant prefix. This
implementation does not depend on a constant prefix, and runs the same query in 640 ms.
> Some use cases I envision:
>  1. lexicography/etc on large text corpora
>  2. looking for things such as urls where the prefix is not constant (http:// or ftp://)
> The Filter uses the BRICS package ( to convert regular
expressions into a DFA. Then, the filter "enumerates" terms in a special way, by using the
underlying state machine. Here is my short description from the comments:
>      The algorithm here is pretty basic. Enumerate terms, but instead of a binary accept/reject:
>      1. Look at the portion that is OK (did not enter a reject state in the DFA)
>      2. Generate the next possible String and seek to that.
> The Query simply wraps the Filter with ConstantScoreQuery.
> I did not include the automaton.jar inside the patch but it can be downloaded from
and is BSD-licensed.
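
The enumeration idea quoted above can be sketched roughly as follows. This is not the patch's code: it uses a sorted TreeSet in place of Lucene's term dictionary and java.util.regex (with Matcher.hitEnd() as a prefix-viability test) in place of the BRICS DFA; all names are made up.

```java
import java.util.*;
import java.util.regex.*;

// Hypothetical sketch of automaton-guided term enumeration: on a rejected
// term, keep the longest prefix that could still lead to a match, bump the
// character that broke it, and seek forward -- skipping whole runs of terms.
class AutomatonEnum {
    /** True if prefix matches p, or could be extended to a full match. */
    static boolean viable(Pattern p, String prefix) {
        Matcher m = p.matcher(prefix);
        return m.matches() || m.hitEnd();
    }

    static List<String> matching(NavigableSet<String> terms, Pattern p) {
        List<String> hits = new ArrayList<>();
        String t = terms.isEmpty() ? null : terms.first();
        while (t != null) {
            if (p.matcher(t).matches()) hits.add(t);
            if (viable(p, t)) {
                t = terms.higher(t); // some extension of t might still match
            } else {
                // keep the longest prefix that could still lead to a match...
                int k = t.length();
                while (k > 0 && !viable(p, t.substring(0, k))) k--;
                // ...then "generate the next possible String and seek to that"
                // (simplified: ignores overflow when t.charAt(k) == '\uffff')
                String next = t.substring(0, k) + (char) (t.charAt(k) + 1);
                t = terms.ceiling(next);
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        NavigableSet<String> terms = new TreeSet<>(Arrays.asList(
                "apple", "ftp://x", "http://a", "http://b", "zebra"));
        System.out.println(matching(terms, Pattern.compile("(http|ftp)://.*")));
        // prints [ftp://x, http://a, http://b]
    }
}
```

On the example set this touches only one non-matching term ("apple") before seeking directly to "ftp://x", which is the point: without a constant prefix, a plain enumeration would have tested every term.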

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.

