lucene-solr-dev mailing list archives

From "Shalin Shekhar Mangar (JIRA)" <>
Subject [jira] Commented: (SOLR-1204) Enhance SpellingQueryConverter to handle UTF-8 instead of ASCII only
Date Sat, 06 Jun 2009 05:31:08 GMT


Shalin Shekhar Mangar commented on SOLR-1204:

In order to produce a correct patch, I need to know what the legal field names are. It can
hardly be "any UTF-8 string", as that would also include the colon, which is already used to
delimit field names from query strings. What about digits? Asterisk? Dash (minus)? Underscore? Space?

Lucene does not limit field names; those special characters are actually limitations of
our query parser syntax. However, you are right, we need to view them from Solr's point of
view. Let us try to limit this to valid Java identifiers, or the closest we can get to that.
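The difference under discussion can be sketched in plain Java regex terms. This is only an illustration of the proposed change, not the actual SpellingQueryConverter code: `\w+` matches ASCII word characters only, so it splits non-ASCII terms, while `\p{L}+` matches any Unicode letter.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class FieldNameRegexDemo {

    // Returns the first token the given pattern finds in the input,
    // or the empty string when nothing matches.
    static String firstMatch(Pattern p, String s) {
        Matcher m = p.matcher(s);
        return m.find() ? m.group() : "";
    }

    public static void main(String[] args) {
        String term = "café"; // a UTF-8 term a user might spell-check

        // \w is [a-zA-Z_0-9] by default in Java: 'é' breaks the token.
        Pattern ascii = Pattern.compile("\\w+");
        // \p{L} is any Unicode letter: the token stays whole.
        Pattern unicode = Pattern.compile("\\p{L}+");

        System.out.println(firstMatch(ascii, term));   // prints "caf"
        System.out.println(firstMatch(unicode, term)); // prints "café"
    }
}
```

Note that `\p{L}+` alone still drops digits and underscores, which is why the question of what counts as a legal field name matters before settling on a pattern.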

> Enhance SpellingQueryConverter to handle UTF-8 instead of ASCII only
> --------------------------------------------------------------------
>                 Key: SOLR-1204
>                 URL:
>             Project: Solr
>          Issue Type: Improvement
>          Components: spellchecker
>    Affects Versions: 1.3
>            Reporter: Michael Ludwig
>            Assignee: Shalin Shekhar Mangar
>            Priority: Trivial
>             Fix For: 1.4
>         Attachments:
> Solr - User - SpellCheckComponent: queryAnalyzerFieldType
> In the above thread, it was suggested to extend the SpellingQueryConverter to cover the
full UTF-8 range instead of handling US-ASCII only. This might be as simple as changing the
regular expression used to tokenize the input string to accept a sequence of one or more Unicode
letters ( \p{L}+ ) instead of a sequence of one or more word characters ( \w+ ).
> See for Java regular
expression reference.

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
