lucene-dev mailing list archives

From "Steve Rowe (JIRA)" <>
Subject [jira] [Commented] (SOLR-9185) Solr's "Lucene"/standard query parser should not split on whitespace before sending terms to analysis
Date Fri, 10 Jun 2016 16:27:21 GMT


Steve Rowe commented on SOLR-9185:

bq. I think we need an option that turns the whitespace split off.

I disagree.  I think the current behavior is counter to users' expectations, so we should
just get rid of it.

I suppose we could add luceneMatchVersion-sensitive code and include both versions, but yuck,
I'd much rather not do that.

bq. I think the default behavior in 6.x should remain unchanged. We can change the default
in master.

I disagree.  I think we should change the default behavior ASAP.

bq. The implementation might take a while to become bulletproof. I suspect that the query
parser code relies heavily on the current behavior and that things will break in unexpected
ways when changing that behavior.

Here I agree.  (e)dismax and other parsers that are based on the Solr clone of the Lucene
QP will need work before this change can be released.

> Solr's "Lucene"/standard query parser should not split on whitespace before sending terms to analysis
> -----------------------------------------------------------------------------------------------------
>                 Key: SOLR-9185
>                 URL:
>             Project: Solr
>          Issue Type: Bug
>            Reporter: Steve Rowe
>            Assignee: Steve Rowe
> Copied from LUCENE-2605:
> The queryparser parses input on whitespace, and sends each whitespace-separated term to its own independent token stream.
> This breaks the following at query-time, because they can't see across whitespace boundaries:
> n-gram analysis
> shingles
> synonyms (especially multi-word for whitespace-separated languages)
> languages where a 'word' can contain whitespace (e.g. Vietnamese)
> It's also rather unexpected, as users think their charfilters/tokenizers/tokenfilters will do the same thing at index and query time, but
> in many cases they can't. Instead, preferably the queryparser would parse around only real 'operators'.
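
To see why the order of splitting and analysis matters, here is a toy sketch (not Lucene API; all names are hypothetical) of a multi-word synonym rewrite. The synonym can only fire when the analyzer sees the whole query string; if the parser splits on whitespace first, each fragment is analyzed alone and the phrase never matches:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;

// Toy model, not Lucene code: a "multi-word synonym" step followed by
// whitespace tokenization. Names and logic are illustrative only.
public class WhitespaceSplitDemo {
    // Hypothetical synonym map keyed on a multi-word phrase.
    static final Map<String, String> SYNONYMS = Map.of("new york", "ny");

    // Analyze one chunk of text: apply phrase synonyms, then tokenize.
    static List<String> analyze(String text) {
        String rewritten = text.toLowerCase();
        for (Map.Entry<String, String> e : SYNONYMS.entrySet()) {
            rewritten = rewritten.replace(e.getKey(), e.getValue());
        }
        return new ArrayList<>(Arrays.asList(rewritten.split("\\s+")));
    }

    // Current parser behavior: split on whitespace FIRST, then analyze each
    // term independently -- the synonym cannot match across the split.
    static List<String> splitThenAnalyze(String query) {
        List<String> out = new ArrayList<>();
        for (String term : query.split("\\s+")) {
            out.addAll(analyze(term));
        }
        return out;
    }

    public static void main(String[] args) {
        String query = "New York pizza";
        System.out.println(splitThenAnalyze(query)); // [new, york, pizza]
        System.out.println(analyze(query));          // [ny, pizza]
    }
}
```

The same pre-split/whole-string contrast is what breaks shingles and n-grams in the issue description above: any analysis component that needs to see across a whitespace boundary is defeated before it runs.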

This message was sent by Atlassian JIRA
