lucene-dev mailing list archives

From "Fuad Efendi (JIRA)" <>
Subject [jira] Commented: (LUCENE-2089) explore using automaton for fuzzyquery
Date Thu, 11 Feb 2010 02:12:29 GMT


Fuad Efendi commented on LUCENE-2089:

Levenshtein Distance is good for the "Spelling Corrections" use case (even the terminology is
the same: insert, delete, replace...)
But it is not good for more generic similarity: the distance between RunAutomation and AutomationRun
is huge (6!). That is really a two-word combination, though, and I don't know a good one-(human)-word
use case where Levenshtein Distance is not good (or natural). From another viewpoint, I can't
see any use case where StrikeAMatch (counting shared 2-char pairs) is bad, although it is
not "spelling corrections". And, from a third viewpoint, if we totally forget that it is
human-generated input and implement Levenshtein distance on raw bitsets instead of Unicode
characters (end-user keystrokes)... we will get totally unacceptable results...
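To make the contrast concrete, here is a minimal sketch (mine, not part of the original message) of both measures: classic Levenshtein distance via dynamic programming, and a StrikeAMatch-style similarity that counts shared adjacent character pairs. The class and method names are made up for illustration.

```java
import java.util.ArrayList;
import java.util.List;

public class SimilarityDemo {

    // Classic Levenshtein distance: minimum number of single-character
    // insertions, deletions, and substitutions turning s into t.
    static int levenshtein(String s, String t) {
        int[] prev = new int[t.length() + 1];
        int[] curr = new int[t.length() + 1];
        for (int j = 0; j <= t.length(); j++) prev[j] = j;
        for (int i = 1; i <= s.length(); i++) {
            curr[0] = i;
            for (int j = 1; j <= t.length(); j++) {
                int cost = s.charAt(i - 1) == t.charAt(j - 1) ? 0 : 1;
                curr[j] = Math.min(Math.min(curr[j - 1] + 1, prev[j] + 1),
                                   prev[j - 1] + cost);
            }
            int[] tmp = prev; prev = curr; curr = tmp;
        }
        return prev[t.length()];
    }

    // StrikeAMatch-style similarity: 2 * |shared adjacent pairs| / (total pairs),
    // treating the pair lists as multisets.
    static double strikeAMatch(String s, String t) {
        List<String> pairs1 = pairs(s);
        List<String> pairs2 = pairs(t);
        int union = pairs1.size() + pairs2.size();
        int shared = 0;
        for (String p : pairs1) {
            if (pairs2.remove(p)) shared++;  // multiset intersection
        }
        return (2.0 * shared) / union;
    }

    static List<String> pairs(String s) {
        List<String> result = new ArrayList<>();
        for (int i = 0; i < s.length() - 1; i++) {
            result.add(s.substring(i, i + 2));
        }
        return result;
    }

    public static void main(String[] args) {
        // Levenshtein sees the swapped words as far apart (6 edits)...
        System.out.println(levenshtein("RunAutomation", "AutomationRun"));
        // ...while the pair-count similarity sees them as nearly identical (~0.92).
        System.out.println(strikeAMatch("RunAutomation", "AutomationRun"));
    }
}
```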

I believe such "distance" algos were initially designed many years ago, before the Internet (and
Search), to allow auto-recovery during data transmission (the first astronauts...) - but that
auto-recovery was based on the fact that an (acceptably) corrupted code word has one and only one
closest match in the dictionary; so it was extremely fast (50 years ago). And now we are using an
old algo, designed for a completely different use case (fixed-size bitset transmissions), for
"spelling corrections"...

What if we focus on a keyboard (101 keys?) instead of Unicode... "spelling corrections"...
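One way to read this suggestion (my sketch, not spelled out in the original message): weight the substitution cost in the Levenshtein recurrence by physical key adjacency, so mistyping a neighboring key costs less than hitting a distant one. The tiny adjacency table and all names below are illustrative assumptions only.

```java
import java.util.Map;

public class KeyboardDistance {

    // Tiny illustrative QWERTY adjacency table; a real one would cover
    // the whole layout (101 keys).
    static final Map<Character, String> NEIGHBORS = Map.of(
            'q', "wa", 'w', "qase", 'e', "wsdr",
            'a', "qwsz", 's', "awedxz", 'd', "serfcx");

    // Substitution cost: 0 for a match, 0.5 for physically adjacent keys, 1 otherwise.
    static double subCost(char a, char b) {
        if (a == b) return 0.0;
        String n = NEIGHBORS.get(a);
        return (n != null && n.indexOf(b) >= 0) ? 0.5 : 1.0;
    }

    // Standard Levenshtein DP, with the weighted substitution cost above.
    static double distance(String s, String t) {
        double[][] d = new double[s.length() + 1][t.length() + 1];
        for (int i = 0; i <= s.length(); i++) d[i][0] = i;
        for (int j = 0; j <= t.length(); j++) d[0][j] = j;
        for (int i = 1; i <= s.length(); i++) {
            for (int j = 1; j <= t.length(); j++) {
                d[i][j] = Math.min(Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1),
                        d[i - 1][j - 1] + subCost(s.charAt(i - 1), t.charAt(j - 1)));
            }
        }
        return d[s.length()][t.length()];
    }

    public static void main(String[] args) {
        // "ws" is a plausible fat-finger typo for "es" (w sits next to e),
        // so it scores closer than "qs" (q is not adjacent to e).
        System.out.println(distance("es", "ws"));  // 0.5
        System.out.println(distance("es", "qs"));  // 1.0
    }
}
```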

20ms is not good: that is only 50 TPS (on a single core)...

> explore using automaton for fuzzyquery
> --------------------------------------
>                 Key: LUCENE-2089
>                 URL:
>             Project: Lucene - Java
>          Issue Type: Wish
>          Components: Search
>            Reporter: Robert Muir
>            Assignee: Mark Miller
>            Priority: Minor
>         Attachments: LUCENE-2089.patch, Moman-0.2.1.tar.gz,
> Mark brought this up on LUCENE-1606 (I will assign this to him, I know he is itching
to write that nasty algorithm)
> We can optimize FuzzyQuery by using AutomatonTermsEnum; here is my idea:
> * up front, calculate the maximum required K edits needed to match the user's supplied
float threshold.
> * for at least small common E up to some max K (1, 2, 3, etc.) we should create a DFA for
each E.
> If the required E is above our supported max, we use "dumb mode" at first (no seeking,
no DFA, just brute force like now).
> As the pq fills, we swap progressively lower DFAs into the enum, based upon the lowest
score in the pq.
> This should work well on average; at high E, you will typically fill the pq very quickly
since you will match many terms.
> This not only provides a mechanism to switch to more efficient DFAs during enumeration,
but also to switch from "dumb mode" to "smart mode".
> I modified my wildcard benchmark to generate random fuzzy queries.
> * Pattern: 7N stands for NNNNNNN, etc.
> * AvgMS_DFA: this is the time spent creating the automaton (constructor)
> ||Pattern||Iter||AvgHits||AvgMS(old)||AvgMS (new,total)||AvgMS_DFA||
> |7N|10|64.0|4155.9|38.6|20.3|
> |14N|10|0.0|2511.6|46.0|37.9|
> |28N|10|0.0|2506.3|93.0|86.6|
> |56N|10|0.0|2524.5|304.4|298.5|
> As you can see, this prototype is no good yet, because it creates the DFA in a slow way:
right now it creates an NFA, and all the wasted time is in the NFA->DFA conversion.
> So, for a very long string, it just gets worse and worse. This has nothing to do with
Lucene, and here you can see the TermEnum is fast (AvgMS - AvgMS_DFA); there is no problem
there.
> Instead we should just build a DFA to begin with, maybe with this paper:
> We can precompute the tables with that algorithm up to some reasonable K, and then I
think we are OK.
> The paper references using for linear minimization;
if someone wants to implement this, they should not worry about minimization.
> In fact, we need to determine at some point whether AutomatonQuery should even minimize FSMs
at all, or whether it is simply enough for them to be deterministic with no transitions to dead
states. (The only code that actually assumes a minimal DFA is the "Dumb" vs "Smart" heuristic,
and this can be rewritten as a summation easily.) We need to benchmark really complex DFAs
(i.e. write a regex benchmark) to figure out whether minimization is even helping right now.
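The first quoted step (mapping the user's float threshold to a maximum edit count K) can be sketched as below. This assumes the classic FuzzyQuery-style definition similarity = 1 - distance / termLength; that reading, and all names here, are my assumptions, not something the message specifies.

```java
public class MaxEditsFromThreshold {

    // Under similarity = 1 - distance/termLength, a term at edit distance K
    // still meets the threshold when K <= (1 - threshold) * termLength.
    static int maxEdits(float threshold, int termLength) {
        return (int) Math.floor((1.0 - threshold) * termLength);
    }

    public static void main(String[] args) {
        // With threshold 0.5, a 7-char term tolerates 3 edits, so DFAs for
        // E = 1, 2, 3 would cover this query before "dumb mode" is needed.
        System.out.println(maxEdits(0.5f, 7));   // 3
        System.out.println(maxEdits(0.75f, 8));  // 2
    }
}
```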

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.

