lucene-dev mailing list archives

From "Robert Muir (JIRA)" <j...@apache.org>
Subject [jira] Commented: (LUCENE-2089) explore using automaton for fuzzyquery
Date Sun, 22 Nov 2009 21:49:39 GMT

    [ https://issues.apache.org/jira/browse/LUCENE-2089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12781230#action_12781230 ]

Robert Muir commented on LUCENE-2089:
-------------------------------------

Mark, from his page it seemed like 0.2 was the version with the generalized edit distance?

{noformat}
Moman 0.2 is out!
(2005-07-29) This version adds the possibility to use a Levenshtein distance greater than 1.

Before, the transition tables were static; now we build them.
It means that in theory, you could ask for a Levenshtein distance of 27! 
Well, if you have a week ahead of you... 
{noformat}

> explore using automaton for fuzzyquery
> --------------------------------------
>
>                 Key: LUCENE-2089
>                 URL: https://issues.apache.org/jira/browse/LUCENE-2089
>             Project: Lucene - Java
>          Issue Type: Wish
>          Components: Search
>            Reporter: Robert Muir
>            Assignee: Mark Miller
>            Priority: Minor
>         Attachments: Moman-0.1.tar.gz
>
>
> Mark brought this up on LUCENE-1606 (I will assign this to him, I know he is itching to write that nasty algorithm).
> We can optimize FuzzyQuery by using AutomatonTermEnum; here is my idea:
> * Up front, calculate the maximum number of edits K required to match the user's supplied float threshold (see the sketch after this list).
> * For at least the common values of K (1, 2, 3, etc.) we should use AutomatonTermEnum; if K is outside that range, maybe use the existing slow logic. At high K it will seek too much to be helpful anyway.
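> A minimal sketch of the first step (assuming the usual scaled similarity, similarity = 1 - editDistance / min(len(term), len(target)); the names below are illustrative, not the real FuzzyTermEnum code):
> {noformat}
> // sketch: derive the maximum edit distance K implied by a similarity threshold,
> // under the scaled-similarity assumption stated above
> static int maxRequiredEdits(float minimumSimilarity, int queryTermLength) {
>   // any match must satisfy editDistance <= (1 - minimumSimilarity) * length
>   return (int) Math.floor((1.0f - minimumSimilarity) * queryTermLength);
> }
> {noformat}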
> I modified my wildcard benchmark to generate random fuzzy queries.
> * Pattern: 7N stands for NNNNNNN, etc.
> * AvgMS_DFA: this is the time spent creating the automaton (constructor)
> ||Pattern||Iter||AvgHits||AvgMS (old)||AvgMS (new, total)||AvgMS_DFA||
> |7N|10|64.0|4155.9|38.6|20.3|
> |14N|10|0.0|2511.6|46.0|37.9|
> |28N|10|0.0|2506.3|93.0|86.6|
> |56N|10|0.0|2524.5|304.4|298.5|
> As you can see, this prototype is no good yet, because it creates the DFA in a slow way: right now it creates an NFA, and all of this wasted time is in the NFA->DFA conversion.
> So for a very long string it just gets worse and worse. This has nothing to do with Lucene; as you can see, the TermEnum itself is fast (AvgMS - AvgMS_DFA, e.g. 304.4 - 298.5 = 5.9 ms for the 56N pattern), so there is no problem there.
> Instead we should just build a DFA to begin with, maybe with this paper: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.16.652
> We can precompute the tables with that algorithm up to some reasonable K, and then I think we are OK.
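> Whatever construction we use, the DFA for distance K just has to accept exactly the terms within K edits of the query term. A naive reference check (a sketch only; fine as a test oracle, far too slow for the enum itself):
> {noformat}
> // sketch: plain two-row dynamic-programming Levenshtein distance, compared against K
> static boolean withinEdits(String a, String b, int k) {
>   int[] prev = new int[b.length() + 1];
>   int[] curr = new int[b.length() + 1];
>   for (int j = 0; j <= b.length(); j++) prev[j] = j;
>   for (int i = 1; i <= a.length(); i++) {
>     curr[0] = i;
>     for (int j = 1; j <= b.length(); j++) {
>       int cost = (a.charAt(i - 1) == b.charAt(j - 1)) ? 0 : 1;
>       curr[j] = Math.min(Math.min(curr[j - 1] + 1, prev[j] + 1), prev[j - 1] + cost);
>     }
>     int[] tmp = prev; prev = curr; curr = tmp;
>   }
>   return prev[b.length()] <= k;
> }
> {noformat}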
> The paper references http://portal.acm.org/citation.cfm?id=135907 for linear-time minimization; if someone wants to implement this, they should not worry about minimization.
> In fact, at some point we need to determine whether AutomatonQuery should even minimize FSMs at all, or whether it is enough for them simply to be deterministic with no transitions to dead states. (The only code that actually assumes a minimal DFA is the "Dumb" vs. "Smart" heuristic, and that can easily be rewritten as a summation.) We need to benchmark really complex DFAs (i.e. write a regex benchmark) to figure out whether minimization is even helping right now.
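> One cheap way to get a feel for this (a sketch assuming the dk.brics.automaton package is available; not a real benchmark) is to compare state counts and timing before and after minimization on a hairy regex:
> {noformat}
> import dk.brics.automaton.Automaton;
> import dk.brics.automaton.RegExp;
>
> public class MinimizeProbe {
>   public static void main(String[] args) {
>     // deliberately messy regex so the determinized automaton is non-trivial
>     Automaton a = new RegExp("(ab|a*cd)*ef+[g-k]{2,5}").toAutomaton();
>     a.determinize();                   // deterministic, but not necessarily minimal
>     int before = a.getNumberOfStates();
>     long t0 = System.nanoTime();
>     a.minimize();                      // full DFA minimization
>     long micros = (System.nanoTime() - t0) / 1000;
>     System.out.println("states " + before + " -> " + a.getNumberOfStates()
>         + ", minimize() took " + micros + " us");
>   }
> }
> {noformat}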

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

