lucene-dev mailing list archives

From "Eks Dev (JIRA)" <j...@apache.org>
Subject [jira] Commented: (LUCENE-2089) explore using automaton for fuzzyquery
Date Thu, 11 Feb 2010 23:48:27 GMT

    [ https://issues.apache.org/jira/browse/LUCENE-2089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12832741#action_12832741
] 

Eks Dev commented on LUCENE-2089:
---------------------------------

{quote}
I assume you mean by weighted edit distance that the transitions in the state machine would
have costs?
{quote}

Yes, more or less: the costs are not embedded in the trie, just defined externally.

What I am talking about is part of the noisy channel approach, modeling only the channel
distribution. Have a look at http://norvig.com/spell-correct.html for the basic theory. I am
suggesting almost the same thing, just applied at the character level and without the language
model part. It is rather easy once you have your dictionary in some sort of tree structure.
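To make the "dictionary in a tree structure" part concrete, here is a minimal character-trie sketch (plain illustrative Java, not Lucene's actual term dictionary; all names are made up):

```java
import java.util.HashMap;
import java.util.Map;

// A minimal character trie holding the term dictionary.
class TrieNode {
    final Map<Character, TrieNode> children = new HashMap<>();
    boolean isTerm; // true if a dictionary term ends at this node

    void insert(String term) {
        TrieNode node = this;
        for (char c : term.toCharArray()) {
            node = node.children.computeIfAbsent(c, k -> new TrieNode());
        }
        node.isTerm = true;
    }

    boolean contains(String term) {
        TrieNode node = this;
        for (char c : term.toCharArray()) {
            node = node.children.get(c);
            if (node == null) return false;
        }
        return node.isTerm;
    }
}
```

Terms sharing a prefix share a path from the root, which is exactly what makes the prefix-recycling described below possible.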


You guide the traversal over the trie by iterating on each char in your search term,
accumulating log probabilities of single transformations (recycling the shared prefix part).
When you hit a leaf, insert it into a PriorityQueue of appropriate depth. What I mean by
"probabilities of single transformations" are:
insertion(char a) // map char -> log probability (think of it as the cost of inserting this
particular character)
deletion(char a) // map char -> log probability...
transposition(char a, char b) // map <char, char> -> log probability
replacement(char a, char b) // 2D matrix <char, char> -> log probability (cost)
If you wish, you could even add some positional information, e.g. boosting matches at the
start/end of the string.

I have glossed over the tricky mechanics of handling insertion and deletion during traversal,
but on a trie you can do it by following different paths...

The only good in-memory implementation around that I know of is in the LingPipe spell checker
(they implement the full noisy channel, with the language model driving the traversal)... it has
huge educational value; Bob is really great at explaining things. The code itself is
proprietary. I would suggest you peek into that code to see this two-minute rambling of mine
properly explained :) Just ignore the language model part and assume you have a NULL language
model (all chars in the language are equally probable), doing a full traversal over the trie.

{quote}
If this is the case couldn't we even define standard levenshtein very easily (instead of nasty
math), and would the beam search technique enumerate efficiently for us?
{quote}
Standard Levenshtein is trivially configured once you have this: just set all these costs
(delete, insert...) to 1 in the log domain. But who would use the standard distance with such a
beast available, which can reduce the impact of inserting/deleting a silent "h", as in "Thomas"
vs. "Tomas"...
Enumeration is trie traversal, practically calculating the distance against all terms at the
same time and collecting the N best along the way. The place where you save time is in recycling
the shared prefix part of this calculation. Enumeration is optimal in the sense that the trie
contains only the terms from the term dictionary, so you are not trying all possible alphabet
characters, and you can implement "early path abandoning" easily, either by cost (log
probability) and/or by limiting the number of successive insertions.
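A rough sketch of this enumeration (illustrative names, unit costs and a fixed cost bound standing in for the full weighted model and the PriorityQueue of N best): one edit-distance DP row per trie node, the parent's row recycled as the shared prefix, and subtrees pruned when the row minimum exceeds the bound:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Trie enumeration computing edit distance against all terms at once,
// recycling the DP row of the shared prefix and abandoning paths early.
class FuzzyTrie {
    final Map<Character, FuzzyTrie> children = new HashMap<>();
    String term; // non-null if a dictionary term ends here

    void insert(String t) {
        FuzzyTrie node = this;
        for (char c : t.toCharArray())
            node = node.children.computeIfAbsent(c, k -> new FuzzyTrie());
        node.term = t;
    }

    /** Collect all terms within maxCost edits of the query. */
    List<String> search(String query, double maxCost) {
        List<String> hits = new ArrayList<>();
        double[] row = new double[query.length() + 1];
        for (int i = 0; i <= query.length(); i++) row[i] = i; // empty candidate vs. query prefix
        for (Map.Entry<Character, FuzzyTrie> e : children.entrySet())
            e.getValue().walk(e.getKey(), query, row, maxCost, hits);
        return hits;
    }

    private void walk(char c, String query, double[] prev, double maxCost, List<String> hits) {
        double[] row = new double[prev.length]; // recycles prev: only one new row per node
        row[0] = prev[0] + 1.0;                 // candidate grew by c, query prefix empty
        double rowMin = row[0];
        for (int i = 1; i < row.length; i++) {
            double replace = prev[i - 1] + (query.charAt(i - 1) == c ? 0.0 : 1.0);
            double insert = prev[i] + 1.0;
            double delete = row[i - 1] + 1.0;
            row[i] = Math.min(replace, Math.min(insert, delete));
            rowMin = Math.min(rowMin, row[i]);
        }
        if (term != null && row[row.length - 1] <= maxCost) hits.add(term);
        if (rowMin <= maxCost) // early path abandoning: the whole subtree is hopeless
            for (Map.Entry<Character, FuzzyTrie> e : children.entrySet())
                e.getValue().walk(e.getKey(), query, row, maxCost, hits);
    }
}
```

A weighted channel model would simply replace the three 1.0 constants with per-character costs, and a real implementation would collect the N best via a bounded PriorityQueue instead of a fixed cut-off, but the prefix recycling and the pruning test are the essential parts.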

If you are interested in really in-depth material, look at
http://www.amazon.com/Algorithms-Strings-Trees-Sequences-Computational/dp/0521585198
Great book (another great tip from Bob@LingPipe). A bit strange with terminology (at least to
me), but once you get used to it, it is really worth the time you spend trying to grasp it.


> explore using automaton for fuzzyquery
> --------------------------------------
>
>                 Key: LUCENE-2089
>                 URL: https://issues.apache.org/jira/browse/LUCENE-2089
>             Project: Lucene - Java
>          Issue Type: Wish
>          Components: Search
>            Reporter: Robert Muir
>            Assignee: Mark Miller
>            Priority: Minor
>         Attachments: LUCENE-2089.patch, Moman-0.2.1.tar.gz, TestFuzzy.java
>
>
> Mark brought this up on LUCENE-1606 (I will assign this to him, I know he is itching
to write that nasty algorithm)
> we can optimize fuzzyquery by using AutomatonTermsEnum, here is my idea
> * up front, calculate the maximum required K edits needed to match the users supplied
float threshold.
> * for at least the small common E, up to some max K (1, 2, 3, etc.), we should create a DFA
for each E.
> if the required E is above our supported max, we use "dumb mode" at first (no seeking,
no DFA, just brute force like now).
> As the pq fills, we swap progressively lower DFAs into the enum, based upon the lowest
score in the pq.
> This should work well on avg, at high E, you will typically fill the pq very quickly
since you will match many terms. 
> This not only provides a mechanism to switch to more efficient DFAs during enumeration,
but also to switch from "dumb mode" to "smart mode".
> i modified my wildcard benchmark to generate random fuzzy queries.
> * Pattern: 7N stands for NNNNNNN, etc.
> * AvgMS_DFA: this is the time spent creating the automaton (constructor)
> ||Pattern||Iter||AvgHits||AvgMS(old)||AvgMS (new,total)||AvgMS_DFA||
> |7N|10|64.0|4155.9|38.6|20.3|
> |14N|10|0.0|2511.6|46.0|37.9|
> |28N|10|0.0|2506.3|93.0|86.6|
> |56N|10|0.0|2524.5|304.4|298.5|
> as you can see, this prototype is no good yet, because it creates the DFA in a slow way.
Right now it creates an NFA, and all this wasted time is in the NFA->DFA conversion.
> So, for a very long string, it just gets worse and worse. This has nothing to do with
lucene, and here you can see, the TermEnum is fast (AvgMS - AvgMS_DFA), there is no problem
there.
> instead we should just build a DFA to begin with, maybe with this paper: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.16.652
> we can precompute the tables with that algorithm up to some reasonable K, and then I
think we are ok.
> the paper references using http://portal.acm.org/citation.cfm?id=135907 for linear minimization,
if someone wants to implement this they should not worry about minimization.
> in fact, we need to at some point determine if AutomatonQuery should even minimize FSM's
at all, or if it is simply enough for them to be deterministic with no transitions to dead
states. (The only code that actually assumes minimal DFA is the "Dumb" vs "Smart" heuristic
and this can be rewritten as a summation easily). we need to benchmark really complex DFAs
(i.e. write a regex benchmark) to figure out if minimization is even helping right now.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


---------------------------------------------------------------------
To unsubscribe, e-mail: java-dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: java-dev-help@lucene.apache.org

