lucene-dev mailing list archives

From "Robert Muir (JIRA)" <>
Subject [jira] Commented: (LUCENE-2089) explore using automaton for fuzzyquery
Date Sat, 27 Feb 2010 20:51:06 GMT


Robert Muir commented on LUCENE-2089:

Mike, this is awesome. I ran benchmarks: we are just as fast as before (with only Lev1 and
Lev2 enabled), but with smaller generated code.
When I turn on Lev3, it speeds up the worst-case runs (no prefix, pq=1024, fuzzy of n=3, n=4),
but slows down some of the "better-case" n=3/n=4 runs where there is a prefix or a PQ limit.

I think this is because the benchmark is contrived; realistically, n=3 (with seeking!)
should be a win for users. A less-contrived benchmark (a 'typical' massive term dictionary)
would help with tuning.

Separately, I think we can add heuristics: e.g. for n > 3 WITH a prefix, use the DFA in
"linear mode" until you drop to n=2, as you already have a nice prefix anyway; stuff like
that. But if the user doesn't supply a prefix, I think seeking is always a win.
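The heuristic described here could be sketched as follows. This is purely illustrative: the method and parameter names are made up, and the n=2 cutoff is just the threshold mentioned above, not a tuned value.

```java
// Hypothetical sketch of the heuristic above: with a user-supplied prefix
// and a high edit distance, stay in "linear mode" (step the DFA over terms
// in order, without seeking) until the competitive distance drops to 2.
// Without a prefix, seek unconditionally.
public class SeekHeuristic {
    static boolean shouldSeek(int requiredEdits, int prefixLength) {
        if (prefixLength == 0) return true; // no prefix: seeking is always a win
        return requiredEdits <= 2;          // nice prefix: linear until n drops to 2
    }
}
```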

Here are the results anyway: I ran it many times and it's consistent (obviously, differences
of just a few ms are not significant). I bolded the ones I think illustrate the differences
I am talking about.

It's cool to be at the point where we are actually able to measure these kinds of tradeoffs!

{{Minimum Sim = 0.73f (edit distance of 1)}} 
||Prefix Length||PQ Size||Avg MS (flex trunk)||Avg MS (1,2)||Avg MS (1,2,3)||

{{Minimum Sim = 0.58f (edit distance of 2)}}
||Prefix Length||PQ Size||Avg MS (flex trunk)||Avg MS (1,2)||Avg MS (1,2,3)||

{{Minimum Sim = 0.43f (edit distance of 3)}}
||Prefix Length||PQ Size||Avg MS (flex trunk)||Avg MS (1,2)||Avg MS (1,2,3)||

{{Minimum Sim = 0.29f (edit distance of 4)}}
||Prefix Length||PQ Size||Avg MS (flex trunk)||Avg MS (1,2)||Avg MS (1,2,3)||

> explore using automaton for fuzzyquery
> --------------------------------------
>                 Key: LUCENE-2089
>                 URL:
>             Project: Lucene - Java
>          Issue Type: Improvement
>          Components: Search
>    Affects Versions: Flex Branch
>            Reporter: Robert Muir
>            Assignee: Mark Miller
>            Priority: Minor
>             Fix For: Flex Branch
>         Attachments: LUCENE-2089.patch, LUCENE-2089.patch, LUCENE-2089.patch, LUCENE-2089.patch,
LUCENE-2089.patch, LUCENE-2089.patch, LUCENE-2089.patch, LUCENE-2089.patch, LUCENE-2089.patch,
LUCENE-2089_concat.patch, Moman-0.2.1.tar.gz
> We can optimize FuzzyQuery by using AutomatonTermsEnum. The idea is to speed up the core
FuzzyQuery in similar fashion to the Wildcard and Regex speedups, maintaining all backwards compatibility.
> The advantages are:
> * we can seek to terms that are useful, instead of brute-forcing the entire terms dict
> * we can determine matches faster, as true/false from a DFA is an array lookup; we don't even
need to run Levenshtein.
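To make the "array lookup" point concrete, here is a small self-contained sketch (not Lucene's parametric-table construction; the class and method names are made up) that builds a "within edit distance k of a word" DFA by subset construction over the textbook Levenshtein NFA. Once built, membership testing is just a walk over transition tables:

```java
import java.util.*;

// Illustrative sketch: a Levenshtein DFA built by subset construction.
// Each DFA state is a set of NFA positions (i, e): i chars of `word`
// consumed with e edits so far, encoded as i * (k + 1) + e.
public class LevDfa {
    final String word;
    final int k;
    final Map<Set<Integer>, Integer> stateIds = new HashMap<>();
    final List<Map<Character, Integer>> trans = new ArrayList<>();
    final List<Boolean> accept = new ArrayList<>();
    final int start;

    LevDfa(String word, int k, String alphabet) {
        this.word = word;
        this.k = k;
        start = intern(closure(Set.of(enc(0, 0))), alphabet);
    }

    int enc(int i, int e) { return i * (k + 1) + e; }

    // Epsilon closure: a deletion skips word[i] at the cost of one edit.
    Set<Integer> closure(Set<Integer> s) {
        Deque<Integer> todo = new ArrayDeque<>(s);
        Set<Integer> out = new HashSet<>(s);
        while (!todo.isEmpty()) {
            int p = todo.pop();
            int i = p / (k + 1), e = p % (k + 1);
            if (i < word.length() && e < k) {
                int q = enc(i + 1, e + 1);
                if (out.add(q)) todo.push(q);
            }
        }
        return out;
    }

    int intern(Set<Integer> s, String alphabet) {
        Integer id = stateIds.get(s);
        if (id != null) return id;
        id = trans.size();
        stateIds.put(s, id);
        trans.add(new HashMap<>());
        boolean acc = false;
        for (int p : s) if (p / (k + 1) == word.length()) acc = true;
        accept.add(acc);
        for (char c : alphabet.toCharArray()) {
            Set<Integer> next = new HashSet<>();
            for (int p : s) {
                int i = p / (k + 1), e = p % (k + 1);
                if (i < word.length() && word.charAt(i) == c)
                    next.add(enc(i + 1, e));                        // match
                if (e < k) {
                    next.add(enc(i, e + 1));                        // insertion
                    if (i < word.length()) next.add(enc(i + 1, e + 1)); // substitution
                }
            }
            if (!next.isEmpty())
                trans.get(id).put(c, intern(closure(next), alphabet));
        }
        return id;
    }

    // Membership is a loop of table lookups -- no edit distance is computed.
    boolean matches(String s) {
        int state = start;
        for (char c : s.toCharArray()) {
            Integer next = trans.get(state).get(c);
            if (next == null) return false;
            state = next;
        }
        return accept.get(state);
    }
}
```

A production version would flatten the per-state maps into arrays indexed by character class; the point is that `matches` never runs the Levenshtein algorithm.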
> We build Levenshtein DFAs in linear time with respect to the length of the word:
> To implement support for 'prefix' length, we simply concatenate two DFAs, which doesn't
require us to do NFA->DFA conversion, as the prefix portion is a singleton. The concatenation
is also constant time with respect to the size of the fuzzy DFA; it need only examine the fuzzy
DFA's start state.
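The reason the splice is cheap can be shown with the fuzzy DFA modeled as an opaque membership predicate (names here are illustrative, not Lucene's API): the prefix automaton is a singleton, i.e. one linear chain of states with a single path, so concatenation just redirects the chain's last state to the fuzzy DFA's start state.

```java
import java.util.function.Predicate;

// Sketch: concatenating a singleton prefix automaton onto a fuzzy DFA.
// No NFA->DFA conversion is needed, and nothing beyond the fuzzy DFA's
// start state is ever examined.
public class PrefixSplice {
    static boolean accepts(String prefix, Predicate<String> fuzzyDfa, String term) {
        // Walk the singleton chain: one state per prefix character.
        if (!term.startsWith(prefix)) return false;
        // Hand the remainder to the fuzzy DFA's start state.
        return fuzzyDfa.test(term.substring(prefix.length()));
    }
}
```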
> With this algorithm, parametric tables are precomputed so that DFAs can be constructed
very quickly.
> If the required number of edits is too large (we don't have a table for it), we use "dumb
mode" at first (no seeking, no DFA, just brute force like now).
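"Dumb mode" amounts to running the classic dynamic-programming distance check against every term; a minimal sketch (method names are illustrative, not the patch's):

```java
// Sketch of "dumb mode": no seeking, no DFA -- just the classic O(m*n)
// dynamic-programming Levenshtein distance against each enumerated term,
// keeping those within the requested number of edits.
public class DumbMode {
    static int levenshtein(String a, String b) {
        int[] prev = new int[b.length() + 1];
        int[] cur = new int[b.length() + 1];
        for (int j = 0; j <= b.length(); j++) prev[j] = j;
        for (int i = 1; i <= a.length(); i++) {
            cur[0] = i;
            for (int j = 1; j <= b.length(); j++) {
                int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                cur[j] = Math.min(Math.min(cur[j - 1] + 1, prev[j] + 1),
                                  prev[j - 1] + cost);
            }
            int[] t = prev; prev = cur; cur = t; // reuse the two rows
        }
        return prev[b.length()];
    }

    static boolean withinEdits(String term, String query, int maxEdits) {
        return levenshtein(term, query) <= maxEdits;
    }
}
```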
> As the priority queue fills up during enumeration, the similarity score required to be
a competitive term increases, so the enum gets faster and faster as this happens. This is
because terms in core FuzzyQuery are sorted by boost value, then by term (in lexicographic
order).
> For a large term dictionary with a low minimal similarity, you will fill the pq very
quickly, since you will match many terms.
> This not only provides a mechanism to switch to more efficient DFAs (edit distance of
2 -> edit distance of 1 -> edit distance of 0) during enumeration, but also to switch
from "dumb mode" to "smart mode".
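The switch-down decision can be sketched like this. It assumes the classic FuzzyQuery-style scaling, roughly sim = 1 - editDistance / wordLen (a simplification of the real formula), and made-up names; inverting it gives the largest edit distance that can still beat the PQ's bottom boost.

```java
// Sketch: as the PQ's bottom boost rises, recompute the largest competitive
// edit distance and pick a cheaper automaton.  The similarity scaling
// sim ~= 1 - d / wordLen is an assumption here, not the exact formula.
public class CompetitiveEdits {
    static int maxCompetitiveEdits(float bottomBoost, int wordLen) {
        // invert sim = 1 - d / len  =>  d = (1 - sim) * len
        return (int) Math.floor((1.0 - bottomBoost) * wordLen);
    }

    // Choose which automaton to run: a precomputed DFA when the competitive
    // distance has a table, otherwise -1 for "dumb mode" (brute force).
    static int dfaLevel(float bottomBoost, int wordLen, int maxTable) {
        int n = maxCompetitiveEdits(bottomBoost, wordLen);
        return n <= maxTable ? n : -1;
    }
}
```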
> With this design, we can add more DFAs at any time by adding additional tables. The tradeoff
is that the tables get rather large, so for very high K we would start to increase the size of
Lucene's jar file. The idea is that we don't have to include large tables for very high K, by using
the 'competitive boost' attribute of the priority queue.
> For more information, see

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.

