lucene-dev mailing list archives

From "Uwe Schindler (JIRA)" <>
Subject [jira] Commented: (LUCENE-2089) explore using automaton for fuzzyquery
Date Wed, 17 Feb 2010 09:02:28 GMT


Uwe Schindler commented on LUCENE-2089:

Hi Robert,
I reviewed your latest patch and was a little bit irritated at first, but everything became clear once I also looked into AutomatonTermsEnum and understood what happens there. But there is still some "code duplication" (not really duplication, but functionality duplication):

- If a constant prefix is used, the generated automatons consist of the constant prefix concatenated with a Levenshtein automaton (see the concatenation sketch after this list).
- If you run such an automaton against the term index using the superclass, it will first seek to the prefix term (or some term starting with the prefix), which is fine. As soon as the prefix is no longer valid, the AutomatonTermsEnum stops processing (when running such an automaton with the standard AutomatonTermsEnum).
- The AutomatonFuzzyTermsEnum additionally checks whether the term starts with the prefix and, if not, it ENDs (!) the enumeration. The reason this works is that nextString() in the superclass automatically returns a string starting with the prefix, but this was the main point that irritated me.
- The question now is: is this extra prefix check really needed? Running the automaton against the current term in accept() would simply return no match, and nextString() would stop further processing. Or is the check there so that the accept method does not have to iterate through all distances once the prefix no longer matches?
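
For reference, the construction I mean looks roughly like this. This is only a minimal sketch, not the patch code; the class and method names (Automata, LevenshteinAutomata, Operations) are my assumptions here and may not match what is actually in the branch:

{code:java}
import org.apache.lucene.util.automaton.Automata;
import org.apache.lucene.util.automaton.Automaton;
import org.apache.lucene.util.automaton.LevenshteinAutomata;
import org.apache.lucene.util.automaton.Operations;

public class PrefixFuzzySketch {
  /**
   * Accepts: constantPrefix + (any string within maxEdits edits of rest).
   * maxEdits must stay small; the table-based construction only supports
   * a few edit distances.
   */
  static Automaton prefixPlusLevenshtein(String constantPrefix, String rest, int maxEdits) {
    // automaton accepting exactly the constant prefix
    Automaton prefix = Automata.makeString(constantPrefix);
    // Levenshtein automaton for the remainder of the term (no transpositions)
    Automaton lev = new LevenshteinAutomata(rest, false).toAutomaton(maxEdits);
    // concatenation: every accepted string must start with the constant prefix
    return Operations.concatenate(prefix, lev);
  }
}
{code}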

Maybe you should add some comments to AutomatonFuzzyTermsEnum, or some asserts, to show what's happening, something along the lines of the sketch below.
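
This is only an illustrative sketch of what I mean, not the patch code; the real enum works on the flex API's term references rather than Strings, and the AcceptStatus values here are just stand-ins:

{code:java}
public class PrefixCheckSketch {
  // stand-in for the accept status used by the filtered terms enum
  enum AcceptStatus { YES, NO, END }

  private final String constantPrefix; // "" when there is no prefix

  PrefixCheckSketch(String constantPrefix) {
    this.constantPrefix = constantPrefix;
  }

  AcceptStatus accept(String term) {
    // Invariant worth documenting/asserting: nextString() in the superclass
    // only ever seeks to strings that start with the constant prefix, so the
    // first term without the prefix means we have left the prefix range and
    // no later term can match either.
    if (!term.startsWith(constantPrefix)) {
      return AcceptStatus.END; // end the enumeration instead of just saying NO
    }
    // ... otherwise match the term against the per-distance automatons ...
    return AcceptStatus.YES; // placeholder for the real distance checks
  }
}
{code}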

> explore using automaton for fuzzyquery
> --------------------------------------
>                 Key: LUCENE-2089
>                 URL:
>             Project: Lucene - Java
>          Issue Type: Improvement
>          Components: Search
>    Affects Versions: Flex Branch
>            Reporter: Robert Muir
>            Assignee: Mark Miller
>            Priority: Minor
>             Fix For: Flex Branch
>         Attachments: LUCENE-2089.patch, LUCENE-2089.patch, LUCENE-2089.patch, LUCENE-2089.patch, LUCENE-2089_concat.patch, Moman-0.2.1.tar.gz,
> Mark brought this up on LUCENE-1606 (I will assign this to him, I know he is itching to write that nasty algorithm).
> We can optimize FuzzyQuery by using AutomatonTermsEnum; here is my idea:
> * Up front, calculate the maximum number of edits K needed to match the user's supplied float threshold (see the sketch below).
> * At least for the small, common edit distances E, up to some max K (1, 2, 3, etc.), we should create a DFA for each E.
> If the required E is above our supported max, we use "dumb mode" at first (no seeking, no DFA, just brute force like now).
> As the pq fills, we swap progressively lower DFAs into the enum, based upon the lowest score in the pq.
> This should work well on average: at high E you will typically fill the pq very quickly, since you will match many terms.
> This not only provides a mechanism to switch to more efficient DFAs during enumeration, but also to switch from "dumb mode" to "smart mode".
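> A rough sketch of the up-front calculation and the pq-based lowering, assuming the usual length-scaled similarity (similarity = 1 - edits/termLength) and that the lowest score in the pq can be read as a similarity threshold; the exact formula FuzzyQuery uses may differ:
> {code:java}
> public class FuzzyCutoffSketch {
>   /** Largest edit distance K that can still reach the given similarity threshold. */
>   static int maxRequiredEdits(String term, float minSimilarity) {
>     // similarity = 1 - edits / termLength  =>  edits <= (1 - minSimilarity) * termLength
>     return (int) ((1.0f - minSimilarity) * term.length());
>   }
>
>   // Example: term length 7, threshold 0.5 => at most 3 edits are needed,
>   // so DFAs for E = 1, 2, 3 would cover this query.
>
>   /** As the pq fills, its lowest score acts as a tighter threshold, giving a smaller K. */
>   static int requiredEditsForPq(String term, float lowestScoreInPq) {
>     return (int) ((1.0f - lowestScoreInPq) * term.length());
>   }
> }
> {code}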
> I modified my wildcard benchmark to generate random fuzzy queries.
> * Pattern: 7N stands for NNNNNNN, etc.
> * AvgMS_DFA: this is the time spent creating the automaton (constructor)
> ||Pattern||Iter||AvgHits||AvgMS (old)||AvgMS (new, total)||AvgMS_DFA||
> |7N|10|64.0|4155.9|38.6|20.3|
> |14N|10|0.0|2511.6|46.0|37.9|	
> |28N|10|0.0|2506.3|93.0|86.6|
> |56N|10|0.0|2524.5|304.4|298.5|
> As you can see, this prototype is no good yet, because it creates the DFA in a slow way: right now it creates an NFA, and all this wasted time is in the NFA->DFA conversion.
> So for a very long string it just gets worse and worse. This has nothing to do with Lucene; as you can see, the TermEnum itself is fast (AvgMS - AvgMS_DFA, e.g. only about 38.6 - 20.3 = 18.3 ms for 7N), so there is no problem there.
> Instead we should just build a DFA to begin with, maybe with this paper:
> We can precompute the tables with that algorithm up to some reasonable K, and then I think we are ok.
> The paper references a linear minimization algorithm; if someone wants to implement this, they should not worry about minimization.
> In fact, we need to determine at some point whether AutomatonQuery should even minimize FSMs at all, or whether it is simply enough for them to be deterministic with no transitions to dead states (see the sketch below). The only code that actually assumes a minimal DFA is the "dumb" vs "smart" heuristic, and that can easily be rewritten as a summation. We need to benchmark really complex DFAs (i.e. write a regex benchmark) to figure out whether minimization is even helping right now.
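> To make that concrete, the two alternatives would look roughly like this (only a sketch; the automaton class and method names here are assumptions and may differ from what is actually in the branch):
> {code:java}
> import org.apache.lucene.util.automaton.Automaton;
> import org.apache.lucene.util.automaton.MinimizationOperations;
> import org.apache.lucene.util.automaton.Operations;
>
> public class MinimizeOrNotSketch {
>   // arbitrary work limit for determinization, just for the sketch
>   private static final int LIMIT = 10000;
>
>   /** Fully minimal DFA: only the "dumb" vs "smart" heuristic really needs this. */
>   static Automaton fullyMinimal(Automaton a) {
>     return MinimizationOperations.minimize(a, LIMIT);
>   }
>
>   /** Possibly sufficient for enumeration: deterministic, with dead states removed. */
>   static Automaton deterministicNoDeadStates(Automaton a) {
>     return Operations.removeDeadStates(Operations.determinize(a, LIMIT));
>   }
> }
> {code}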

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
