lucene-dev mailing list archives

From "Otis Gospodnetic (JIRA)" <>
Subject [jira] Updated: (LUCENE-1216) CharDelimiterTokenizer
Date Wed, 14 May 2008 05:53:56 GMT


Otis Gospodnetic updated LUCENE-1216:

         Priority: Minor  (was: Major)
    Lucene Fields: [New, Patch Available]  (was: [Patch Available, New])

> CharDelimiterTokenizer
> ----------------------
>                 Key: LUCENE-1216
>                 URL:
>             Project: Lucene - Java
>          Issue Type: Improvement
>          Components: Analysis
>            Reporter: Hiroaki Kawai
>            Assignee: Otis Gospodnetic
>            Priority: Minor
>         Attachments:
> WhitespaceTokenizer is very useful for space-separated languages, but my Japanese text
> is not always separated by spaces. So I created an alternative Tokenizer that lets us specify
> the delimiter. The submitted file is an improvement on the current WhitespaceTokenizer.
> I tried to extend CharTokenizer, but CharTokenizer has a limitation that a token
> can't be longer than 255 chars.
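
The attached patch isn't shown here, but the core idea described above — splitting on a caller-specified delimiter character rather than only on whitespace — can be sketched in plain Java as follows. This is an illustrative standalone helper, not the actual CharDelimiterTokenizer from the patch, and it deliberately avoids the Lucene Tokenizer API and its buffer limits:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch (not the LUCENE-1216 patch): split text on a
// caller-specified delimiter character, dropping empty tokens, the way
// WhitespaceTokenizer drops runs of whitespace.
public class CharDelimiterSplit {
    public static List<String> tokenize(String text, char delimiter) {
        List<String> tokens = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        for (int i = 0; i < text.length(); i++) {
            char c = text.charAt(i);
            if (c == delimiter) {
                // End of a token; emit it if non-empty (skips runs of delimiters).
                if (current.length() > 0) {
                    tokens.add(current.toString());
                    current.setLength(0);
                }
            } else {
                // Unlike CharTokenizer, the token buffer grows without a fixed cap.
                current.append(c);
            }
        }
        if (current.length() > 0) {
            tokens.add(current.toString());
        }
        return tokens;
    }

    public static void main(String[] args) {
        // For Japanese text one might split on a middle dot instead of a space.
        System.out.println(tokenize("foo,bar,,baz", ','));
    }
}
```

A real Lucene Tokenizer would additionally track character offsets and read from a Reader, but the splitting logic above captures the behavior the description asks for.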

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
