lucene-dev mailing list archives

From "Joel Rosen (JIRA)" <>
Subject [jira] [Commented] (SOLR-3589) Edismax parser does not honor mm parameter if analyzer splits a token
Date Thu, 16 Aug 2012 16:20:38 GMT


Joel Rosen commented on SOLR-3589:

Sounds to me like this is an English-centric design flaw in dismax.  The point of dismax
is to intelligently process simple user-entered phrases, right?  If I understand correctly,
it does this by looking at the terms entered and making some decisions about how to join them
with AND or OR.  But it assumes that a term is a whitespace-delimited string, yes?  That
assumption is incorrect for Chinese.  If, instead of making this assumption, dismax ran the
analyzers first to determine what is and isn't a term, then I imagine you would get more predictable
behavior across both whitespace-delimited and non-whitespace-delimited languages, and you
wouldn't need any "magical" handling for different languages.
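To make the reported behavior concrete, here is a sketch of the kind of request that triggers it (the field name and query are illustrative, not from the original report):

```
# Hypothetical edismax request; mm=100% should require all terms to match
q=fire-fly&defType=edismax&mm=100%25&qf=text

# If the analyzer chain splits "fire-fly" into "fire" and "fly",
# the parsed query effectively becomes:
#   text:fire OR text:fly      (mm is not applied to the split tokens)
# rather than the expected:
#   text:fire AND text:fly
```

For a language like Chinese, where the analyzer does all of the word segmentation, essentially every multi-word query hits this path, which is why the quoted issue below calls it out as especially serious for non-whitespace-delimited languages.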
> Edismax parser does not honor mm parameter if analyzer splits a token
> ---------------------------------------------------------------------
>                 Key: SOLR-3589
>                 URL:
>             Project: Solr
>          Issue Type: Bug
>          Components: search
>    Affects Versions: 3.6
>            Reporter: Tom Burton-West
> With edismax mm set to 100%, if one of the tokens is split into two tokens by the analyzer
chain (e.g. "fire-fly" => "fire fly"), the mm parameter is ignored and the equivalent of
an OR query for "fire OR fly" is produced.
> This is particularly a problem for languages that do not use white space to separate
words, such as Chinese or Japanese.
> See these messages for more discussion:

This message is automatically generated by JIRA.
