lucene-dev mailing list archives

From "Robert Muir (JIRA)" <j...@apache.org>
Subject [jira] Updated: (LUCENE-2089) explore using automaton for fuzzyquery
Date Fri, 19 Feb 2010 20:58:28 GMT

     [ https://issues.apache.org/jira/browse/LUCENE-2089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Robert Muir updated LUCENE-2089:
--------------------------------

    Description: 
We can optimize FuzzyQuery by using AutomatonTermsEnum. The idea is to speed up the core FuzzyQuery
in a similar fashion to the Wildcard and Regex speedups, while maintaining all backwards compatibility.

The advantages are:
* we can seek to terms that are useful, instead of brute-forcing the entire terms dict
* we can determine matches faster, as a true/false answer from a DFA is an array lookup; we don't
even need to run Levenshtein.
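To make the second point concrete, here is a minimal sketch (not Lucene's code) contrasting the two match tests: matchesDfa walks a precomputed transition table, one array lookup per character, while levenshtein is the O(n*m) dynamic program it replaces. The toy DFA accepts exactly the strings "ab" and "abb" over the alphabet {a, b}; a real Levenshtein DFA has the same shape, just more states.

```java
public class DfaVsLevenshtein {
    // trans[state][symbol] -> next state; -1 means "dead" (reject).
    static final int[][] TRANS = {
        {1, -1},   // state 0: 'a' -> 1
        {-1, 2},   // state 1: 'b' -> 2
        {-1, 3},   // state 2: 'b' -> 3
        {-1, -1},  // state 3: no outgoing edges
    };
    static final boolean[] ACCEPT = {false, false, true, true};

    // One array lookup per character; no edit-distance computation at all.
    static boolean matchesDfa(String term) {
        int state = 0;
        for (int i = 0; i < term.length(); i++) {
            int sym = term.charAt(i) - 'a';
            if (sym < 0 || sym > 1) return false; // outside the alphabet
            state = TRANS[state][sym];
            if (state == -1) return false;
        }
        return ACCEPT[state];
    }

    // The classic two-row DP, for comparison: O(n*m) time per candidate term.
    static int levenshtein(String a, String b) {
        int[] prev = new int[b.length() + 1];
        int[] cur = new int[b.length() + 1];
        for (int j = 0; j <= b.length(); j++) prev[j] = j;
        for (int i = 1; i <= a.length(); i++) {
            cur[0] = i;
            for (int j = 1; j <= b.length(); j++) {
                int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                cur[j] = Math.min(Math.min(cur[j - 1] + 1, prev[j] + 1),
                                  prev[j - 1] + cost);
            }
            int[] t = prev; prev = cur; cur = t;
        }
        return prev[b.length()];
    }

    public static void main(String[] args) {
        System.out.println(matchesDfa("ab"));         // true
        System.out.println(matchesDfa("abb"));        // true
        System.out.println(matchesDfa("ba"));         // false
        System.out.println(levenshtein("abb", "ab")); // 1
    }
}
```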

We build Levenshtein DFAs in linear time with respect to the length of the word: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.16.652

To implement support for a 'prefix' length, we simply concatenate two DFAs. This doesn't require
us to do NFA->DFA conversion, as the prefix portion is a singleton. The concatenation is
also constant time with respect to the size of the fuzzy DFA; it only needs to examine its start
state.
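Here is a hedged sketch of why the prefix case is cheap: because the prefix automaton is a singleton (a straight chain of states with one outgoing edge each), concatenating it onto the fuzzy DFA can never introduce nondeterminism. Operationally it is equivalent to consuming the prefix verbatim and handing the remainder to the fuzzy DFA's start state; regardless of how big the fuzzy DFA is, only that entry point is touched. This is an illustration, not Lucene's implementation.

```java
public class PrefixConcat {
    interface Dfa { boolean accepts(String s); }

    // "Concatenation" with a singleton prefix: consume the prefix literally,
    // then run the fuzzy DFA from its start state on the rest. The work done
    // here is independent of the fuzzy DFA's size.
    static Dfa concat(String prefix, Dfa fuzzy) {
        return s -> s.startsWith(prefix) && fuzzy.accepts(s.substring(prefix.length()));
    }

    public static void main(String[] args) {
        // Stand-in "fuzzy DFA": edit distance 0 from "og", i.e. exactly "og".
        Dfa exactOg = "og"::equals;
        Dfa dogMatcher = concat("d", exactOg);
        System.out.println(dogMatcher.accepts("dog")); // true
        System.out.println(dogMatcher.accepts("dig")); // false
    }
}
```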

With this algorithm, parametric tables are precomputed so that DFAs can be constructed very
quickly.
If the required number of edits is too large (we don't have a table for it), we use "dumb
mode" at first (no seeking, no DFA, just brute force as now).

As the priority queue fills up during enumeration, the similarity score required for a term to
be competitive increases, so the enum gets faster and faster as this happens. This is because
terms in core FuzzyQuery are sorted by boost value, then by term (in lexicographic order).

For a large term dictionary with a low minimum similarity, you will fill the pq very quickly,
since you will match many terms.
This not only provides a mechanism to switch to more efficient DFAs (edit distance of 2 ->
edit distance of 1 -> edit distance of 0) during enumeration, but also to switch from "dumb
mode" to "smart mode".
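A sketch of the "switch down" heuristic: in core FuzzyQuery the similarity of a candidate is roughly 1 - distance/length, so a rising competitive boost in the pq caps the edit distance that can still matter. Whenever that cap drops below the edit distance of the automaton currently driving the enum, a cheaper one (2 -> 1 -> 0) can be swapped in, and likewise "dumb mode" can give way to "smart mode" once a table-backed DFA applies. The formula and names here are illustrative, not Lucene's exact code.

```java
public class AutomatonSwitch {
    // Maximum edit distance that can still beat the pq's worst (competitive)
    // similarity, for a query term of the given length. Assumes the classic
    // FuzzyQuery scoring: similarity ~= 1 - editDistance / termLength.
    static int maxCompetitiveEdits(float competitiveSim, int termLength) {
        return (int) ((1.0f - competitiveSim) * termLength);
    }

    public static void main(String[] args) {
        int len = 10;
        System.out.println(maxCompetitiveEdits(0.0f, len));  // 10: pq empty, dumb mode
        System.out.println(maxCompetitiveEdits(0.85f, len)); // 1: the k=1 DFA suffices
        System.out.println(maxCompetitiveEdits(0.95f, len)); // 0: exact match only
    }
}
```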

With this design, we can add more DFAs at any time by adding additional tables. The tradeoff
is that the tables get rather large, so for very high K we would start to increase the size of
Lucene's jar file. The idea is that we don't have to include large tables for very high K, by
using the 'competitive boost' attribute of the priority queue.

For more information, see http://en.wikipedia.org/wiki/Levenshtein_automaton

  was:
Mark brought this up on LUCENE-1606 (I will assign this to him; I know he is itching to write
that nasty algorithm).

We can optimize FuzzyQuery by using AutomatonTermsEnum. Here is my idea:
* up front, calculate the maximum number of edits K needed to match the user-supplied float
threshold.
* for small, common values of E up to some max K (1, 2, 3, etc.), we should create a DFA for
each E.

If the required E is above our supported max, we use "dumb mode" at first (no seeking, no
DFA, just brute force as now).
As the pq fills, we swap progressively lower DFAs into the enum, based upon the lowest score
in the pq.
This should work well on average: at high E, you will typically fill the pq very quickly since
you will match many terms.
This not only provides a mechanism to switch to more efficient DFAs during enumeration, but
also to switch from "dumb mode" to "smart mode".

I modified my wildcard benchmark to generate random fuzzy queries.
* Pattern: 7N stands for NNNNNNN, etc.
* AvgMS_DFA: this is the time spent creating the automaton (constructor)

||Pattern||Iter||AvgHits||AvgMS(old)||AvgMS(new,total)||AvgMS_DFA||
|7N|10|64.0|4155.9|38.6|20.3|
|14N|10|0.0|2511.6|46.0|37.9|
|28N|10|0.0|2506.3|93.0|86.6|
|56N|10|0.0|2524.5|304.4|298.5|

As you can see, this prototype is no good yet, because it creates the DFA in a slow way: right
now it builds an NFA, and all of the wasted time is in the NFA->DFA conversion.
So, for a very long string, it just gets worse and worse. This has nothing to do with Lucene,
and as you can see here, the TermEnum itself is fast (AvgMS - AvgMS_DFA); there is no problem there.

Instead, we should build a DFA to begin with, maybe with this paper: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.16.652
We can precompute the tables with that algorithm up to some reasonable K, and then I think
we are ok.
The paper references http://portal.acm.org/citation.cfm?id=135907 for linear-time minimization;
if someone wants to implement this, they should not worry about minimization.
In fact, we need to at some point determine whether AutomatonQuery should even minimize FSMs at
all, or whether it is simply enough for them to be deterministic with no transitions to dead states.
(The only code that actually assumes a minimal DFA is the "dumb" vs. "smart" heuristic, and this
can easily be rewritten as a summation.) We need to benchmark really complex DFAs (i.e. write
a regex benchmark) to figure out whether minimization is even helping right now.




Edit the description, to hopefully be simpler.

> explore using automaton for fuzzyquery
> --------------------------------------
>
>                 Key: LUCENE-2089
>                 URL: https://issues.apache.org/jira/browse/LUCENE-2089
>             Project: Lucene - Java
>          Issue Type: Improvement
>          Components: Search
>    Affects Versions: Flex Branch
>            Reporter: Robert Muir
>            Assignee: Mark Miller
>            Priority: Minor
>             Fix For: Flex Branch
>
>         Attachments: ContrivedFuzzyBenchmark.java, LUCENE-2089.patch, LUCENE-2089.patch,
LUCENE-2089.patch, LUCENE-2089.patch, LUCENE-2089.patch, LUCENE-2089_concat.patch, Moman-0.2.1.tar.gz,
TestFuzzy.java

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


---------------------------------------------------------------------
To unsubscribe, e-mail: java-dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: java-dev-help@lucene.apache.org

