lucene-solr-dev mailing list archives

From "Mark Bennett (JIRA)" <>
Subject [jira] Updated: (SOLR-822) CharFilter - normalize characters before tokenizer
Date Thu, 21 May 2009 17:39:45 GMT


Mark Bennett updated SOLR-822:

    Attachment: japanese-h-to-k-mapping.txt

In SOLR-814 it was suggested that some systems might want to normalize all Hiragana characters
to their Katakana counterparts.

Although this is not universally agreed upon, *if* somebody wanted to do it, I believe this
mapping file would perform that task when used with this 822 patch.  I don't speak Japanese
and don't have test content yet, so I'm not 100% sure it works, but I wanted to upload it as a start.
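For context, mapping files consumed by solr.MappingCharFilterFactory pair a quoted source string with a quoted target, one rule per line, with "#" comments. A Hiragana-to-Katakana file would contain entries along these lines (an illustrative excerpt, not the contents of the attached file):

{code}
# Map each Hiragana character to its Katakana counterpart.
# In Unicode, the Katakana block sits 0x60 code points above
# the Hiragana block, e.g. U+3042 (あ) maps to U+30A2 (ア).
"あ" => "ア"
"い" => "イ"
"う" => "ウ"
"が" => "ガ"
"ぱ" => "パ"
{code}

The same format also supports multi-character sources and targets, so a single file can express more than one-to-one replacements.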

> CharFilter - normalize characters before tokenizer
> --------------------------------------------------
>                 Key: SOLR-822
>                 URL:
>             Project: Solr
>          Issue Type: New Feature
>          Components: Analysis
>    Affects Versions: 1.3
>            Reporter: Koji Sekiguchi
>            Assignee: Koji Sekiguchi
>            Priority: Minor
>             Fix For: 1.4
>         Attachments: character-normalization.JPG, japanese-h-to-k-mapping.txt, sample_mapping_ja.txt,
sample_mapping_ja.txt, SOLR-822-for-1.3.patch, SOLR-822-renameMethod.patch, SOLR-822.patch,
SOLR-822.patch, SOLR-822.patch, SOLR-822.patch, SOLR-822.patch
> A new plugin which can be placed in front of <tokenizer/>.
> {code:xml}
> <fieldType name="textCharNorm" class="solr.TextField" positionIncrementGap="100" >
>   <analyzer>
>     <charFilter class="solr.MappingCharFilterFactory" mapping="mapping_ja.txt" />
>     <tokenizer class="solr.MappingCJKTokenizerFactory"/>
>     <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>
>     <filter class="solr.LowerCaseFilterFactory"/>
>   </analyzer>
> </fieldType>
> {code}
> Multiple <charFilter/>s can be chained. I'll post a JPEG file to show a character
> normalization sample soon.
> In Japan, there are two types of tokenizers: N-gram (CJKTokenizer) and morphological.
> When we use a morphological analyzer, we need to normalize characters before
> tokenization, because the analyzer uses a Japanese dictionary to detect terms.
> I'll post a patch soon, too.
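To illustrate the chaining mentioned in the description, a field type that applies two mappings in sequence before tokenizing might look like the sketch below (the mapping file names and the choice of solr.CJKTokenizerFactory are illustrative, not part of the patch):

{code:xml}
<fieldType name="textCharNormChain" class="solr.TextField" positionIncrementGap="100" >
  <analyzer>
    <!-- charFilters run in declaration order, before the tokenizer sees the text -->
    <charFilter class="solr.MappingCharFilterFactory" mapping="mapping_width.txt" />
    <charFilter class="solr.MappingCharFilterFactory" mapping="japanese-h-to-k-mapping.txt" />
    <tokenizer class="solr.CJKTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
{code}

Each charFilter rewrites the character stream and passes the result to the next, so the tokenizer only ever sees fully normalized text.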

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
