lucene-dev mailing list archives

From "Eks Dev (JIRA)" <>
Subject [jira] Commented: (LUCENE-2089) explore using automaton for fuzzyquery
Date Fri, 12 Feb 2010 09:32:28 GMT


Eks Dev commented on LUCENE-2089:

...Aaron, I think generation may pose a problem for a full unicode alphabet...

I wouldn't discount Aaron's approach so quickly! There is one *really smart* way to approach
generation of the distance neighborhood. Have a look at FastSS. The
trick is to delete, not to generate variations over the complete alphabet! They call it a "deletion
neighborhood". It also generates far fewer variation terms, reducing pressure on the binary search
in the TermDict!

You do not get all these goodies from a weighted-distance implementation, but the solution
is much simpler. It would work similarly to the current spellchecker (just a lookup on "variations"),
only faster. They even have some example code showing how they generate the "deletions".
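The deletion trick is easy to sketch. Below is a minimal illustration (my own sketch, not FastSS's actual code, and the class name is hypothetical): it enumerates every string obtainable from a term by deleting up to k characters. Matching then reduces to intersecting the deletion neighborhoods of the query term and the dictionary terms, so no alphabet is ever enumerated.

```java
import java.util.HashSet;
import java.util.Set;

public class DeletionNeighborhood {

    // All strings obtainable from `term` by deleting up to k characters
    // (including `term` itself). Size is O(len^k), independent of alphabet.
    public static Set<String> neighborhood(String term, int k) {
        Set<String> result = new HashSet<>();
        result.add(term);
        if (k == 0) {
            return result;
        }
        for (int i = 0; i < term.length(); i++) {
            // delete character i, then recurse with one deletion budget less
            String deleted = term.substring(0, i) + term.substring(i + 1);
            result.addAll(neighborhood(deleted, k - 1));
        }
        return result;
    }

    public static void main(String[] args) {
        // prints 5 variants: fast, ast, fst, fat, fas (in some order)
        System.out.println(neighborhood("fast", 1));
    }
}
```

Note that for k=1 a term of length n produces only n+1 variants, versus roughly n*|alphabet| for insert/substitute generation, which is the whole point for a full unicode alphabet.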

but the more intelligent stuff you speak of could be really cool, esp. for spellchecking. Sure
you don't want to rewrite our spellchecker?

btw it's not clear to me yet: could you implement that stuff on top of the "ghetto DFA" (the sorted
terms dict we have now), or is something more sophisticated needed? It's a lot easier to write
this stuff now with the flex MTQ APIs.

I really  would love to, but I was paid before to work on this. 

I guess the "ghetto DFA" would not work, at least not fast enough (I didn't really think it through).
Practically, you would need to know which characters extend the current character in your dictionary,
or in DFA parlance, all outgoing transitions from the current state. The "ghetto DFA" cannot do
that efficiently?

What would be an idea with flex is to implement this stuff with an in-memory trie (full trie
or TST), before jumping into the noisy channel (that is easy to add later) and a persistent trie dictionary.
The traversal part is identical, and it would make a nice contrib with a useful use case,
as the majority of folks have enough memory to slurp the complete TermDict into memory... It would
serve as a proof of concept for flex and FuzzyQuery, and help you understand the magic of calculating
edit distance against trie structures. Once you have a trie structure, the sky is the limit:
prefix, regex... If I remember correctly, there were some trie implementations floating around;
with one of those you need just one extra traversal method to find all terms at distance N. You can
have a look at the TST implementation, class TernarySearchTrie.matchAlmost(...).
Just for an illustration of what is going on there: it is a simple recursive traversal of
all terms at max distance N.
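For illustration, here is a minimal sketch of that kind of traversal (my own hypothetical classes, not the actual TernarySearchTrie code): it walks a plain character trie while carrying one Levenshtein DP row per edge, and prunes any subtree whose row minimum already exceeds the distance bound, so only a small fraction of the term dictionary is ever visited.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class FuzzyTrie {

    static class Node {
        Map<Character, Node> children = new TreeMap<>();
        String term; // non-null iff a complete term ends at this node
    }

    private final Node root = new Node();

    public void add(String term) {
        Node n = root;
        for (char c : term.toCharArray()) {
            n = n.children.computeIfAbsent(c, k -> new Node());
        }
        n.term = term;
    }

    // Collect all stored terms within edit distance maxDist of `query`.
    public List<String> matchAlmost(String query, int maxDist) {
        List<String> results = new ArrayList<>();
        int[] row = new int[query.length() + 1];
        for (int i = 0; i <= query.length(); i++) {
            row[i] = i; // distance from empty prefix = i deletions
        }
        for (Map.Entry<Character, Node> e : root.children.entrySet()) {
            search(e.getValue(), e.getKey(), query, row, maxDist, results);
        }
        return results;
    }

    private void search(Node node, char c, String query, int[] prevRow,
                        int maxDist, List<String> results) {
        int cols = query.length() + 1;
        int[] row = new int[cols];
        row[0] = prevRow[0] + 1;
        int min = row[0];
        for (int i = 1; i < cols; i++) {
            int insert = row[i - 1] + 1;
            int delete = prevRow[i] + 1;
            int replace = prevRow[i - 1] + (query.charAt(i - 1) == c ? 0 : 1);
            row[i] = Math.min(insert, Math.min(delete, replace));
            min = Math.min(min, row[i]);
        }
        if (node.term != null && row[cols - 1] <= maxDist) {
            results.add(node.term);
        }
        // prune: if every cell exceeds maxDist, no extension can ever qualify
        if (min <= maxDist) {
            for (Map.Entry<Character, Node> e : node.children.entrySet()) {
                search(e.getValue(), e.getKey(), query, row, maxDist, results);
            }
        }
    }

    public static void main(String[] args) {
        FuzzyTrie t = new FuzzyTrie();
        for (String s : new String[] {"lucene", "lucen", "lucid", "apache"}) {
            t.add(s);
        }
        System.out.println(t.matchAlmost("lucene", 1)); // prints [lucen, lucene]
    }
}
```

The same traversal works for prefix and regex matching once the trie is in place; only the per-edge test changes.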
Later we could tweak memory demand, switch to some more compact trie... and at the end add
weighted distance and convince Mike to make a blazing fast persistent trie :)... In the meantime,
the folks with enough memory would have really, really fast fuzzy, prefix... better distance...

So much for the theory :) I hope you find these comments useful, even without patches.


> explore using automaton for fuzzyquery
> --------------------------------------
>                 Key: LUCENE-2089
>                 URL:
>             Project: Lucene - Java
>          Issue Type: Wish
>          Components: Search
>            Reporter: Robert Muir
>            Assignee: Mark Miller
>            Priority: Minor
>         Attachments: LUCENE-2089.patch, Moman-0.2.1.tar.gz,
> Mark brought this up on LUCENE-1606 (I will assign this to him, I know he is itching
to write that nasty algorithm)
> we can optimize FuzzyQuery by using AutomatonTermsEnum. Here is my idea:
> * up front, calculate the maximum required K edits needed to match the user's supplied
float threshold.
> * for at least small common E up to some max K (1, 2, 3, etc.) we should create a DFA for
each E.
> if the required E is above our supported max, we use "dumb mode" at first (no seeking,
no DFA, just brute force like now).
> As the pq fills, we swap progressively lower DFAs into the enum, based upon the lowest
score in the pq.
> This should work well on average: at high E you will typically fill the pq very quickly,
since you will match many terms.
> This not only provides a mechanism to switch to more efficient DFAs during enumeration,
but also to switch from "dumb mode" to "smart mode".
> I modified my wildcard benchmark to generate random fuzzy queries.
> * Pattern: 7N stands for NNNNNNN, etc.
> * AvgMS_DFA: this is the time spent creating the automaton (constructor)
> ||Pattern||Iter||AvgHits||AvgMS(old)||AvgMS (new,total)||AvgMS_DFA||
> |7N|10|64.0|4155.9|38.6|20.3|
> |14N|10|0.0|2511.6|46.0|37.9|
> |28N|10|0.0|2506.3|93.0|86.6|
> |56N|10|0.0|2524.5|304.4|298.5|
> as you can see, this prototype is no good yet, because it creates the DFA in a slow way.
Right now it creates an NFA, and all this wasted time is in the NFA->DFA conversion.
> So, for a very long string, it just gets worse and worse. This has nothing to do with
Lucene, and here you can see the TermEnum is fast (AvgMS - AvgMS_DFA), so there is no problem there.
> Instead, we should just build a DFA to begin with, maybe with this paper:
> we can precompute the tables with that algorithm up to some reasonable K, and then I
think we are ok.
> the paper references using for linear minimization,
if someone wants to implement this they should not worry about minimization.
> in fact, we need to determine at some point if AutomatonQuery should even minimize FSMs
at all, or if it is simply enough for them to be deterministic with no transitions to dead
states. (The only code that actually assumes a minimal DFA is the "Dumb" vs "Smart" heuristic,
and this can be rewritten as a summation easily.) We need to benchmark really complex DFAs
(i.e. write a regex benchmark) to figure out if minimization is even helping right now.
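The up-front K calculation in the first step of the description can be sketched in a few lines. This is my own sketch under one assumption: the classic FuzzyQuery scoring formula sim = 1 - editDistance / min(queryLen, targetLen), where the class and method names are hypothetical.

```java
public class FuzzyK {

    // Largest edit distance k such that a term of length targetLen can still
    // meet the similarity threshold, assuming the classic fuzzy score
    //   sim = 1 - editDistance / min(queryLen, targetLen).
    // Solving sim >= minSimilarity for editDistance gives the bound below.
    public static int maxEdits(float minSimilarity, int queryLen, int targetLen) {
        int shorter = Math.min(queryLen, targetLen);
        return (int) Math.floor((1.0 - minSimilarity) * shorter);
    }

    public static void main(String[] args) {
        System.out.println(maxEdits(0.5f, 7, 7)); // 3
    }
}
```

With the default 0.5 threshold and a 7-character term this gives K = 3, so only DFAs for E = 1..3 would ever be needed; as the pq fills and the cutoff score rises, the same formula yields progressively smaller K, which is what allows swapping in cheaper DFAs mid-enumeration.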

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.

