lucene-dev mailing list archives

From "Hiroaki Kawai (JIRA)" <>
Subject [jira] Updated: (LUCENE-1216) CharDelimiterTokenizer
Date Mon, 04 Aug 2008 10:07:44 GMT


Hiroaki Kawai updated LUCENE-1216:


I'm sorry for the delay. I added a comment about what white space is.

> CharDelimiterTokenizer
> ----------------------
>                 Key: LUCENE-1216
>                 URL:
>             Project: Lucene - Java
>          Issue Type: Improvement
>          Components: Analysis
>            Reporter: Hiroaki Kawai
>            Assignee: Otis Gospodnetic
>            Priority: Minor
>         Attachments:
> WhitespaceTokenizer is very useful for space-separated languages, but my Japanese text
> is not always separated by spaces. So I created an alternative Tokenizer for which the
> delimiter can be specified. The submitted file is an improvement on the current
> WhitespaceTokenizer.
> I tried to extend CharTokenizer, but CharTokenizer has a limitation that a token
> can't be longer than 255 chars.
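
The idea described above can be illustrated with a minimal, Lucene-independent sketch: split text on a caller-specified delimiter character instead of whitespace, accumulating each token in a growable buffer so there is no fixed 255-char cap. The class and method names here are illustrative, not the API of the actual attached patch.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a delimiter-configurable tokenizer (not the actual LUCENE-1216 patch).
public class CharDelimiterSketch {
    public static List<String> tokenize(String text, char delimiter) {
        List<String> tokens = new ArrayList<>();
        // StringBuilder grows as needed, unlike CharTokenizer's fixed 255-char buffer.
        StringBuilder current = new StringBuilder();
        for (int i = 0; i < text.length(); i++) {
            char c = text.charAt(i);
            if (c == delimiter) {
                if (current.length() > 0) {      // skip empty tokens between
                    tokens.add(current.toString()); // consecutive delimiters
                    current.setLength(0);
                }
            } else {
                current.append(c);
            }
        }
        if (current.length() > 0) {
            tokens.add(current.toString());
        }
        return tokens;
    }

    public static void main(String[] args) {
        // Splitting on a comma instead of whitespace:
        System.out.println(tokenize("foo,bar,,baz", ','));
    }
}
```

A real Lucene Tokenizer would instead read from a Reader inside incrementToken(), but the core loop over characters and delimiter checks would be the same.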

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.

