lucene-dev mailing list archives

From "Robert Muir (JIRA)" <>
Subject [jira] Commented: (LUCENE-2909) NGramTokenFilter may generate offsets that exceed the length of original text
Date Mon, 07 Feb 2011 10:13:32 GMT


Robert Muir commented on LUCENE-2909:

You are right: some stemmers increase the term length, so the assumption that endOffset - startOffset == termAtt.length() is a problem.

So, between this and LUCENE-2208, I think we need to add some more checks/asserts to BaseTokenStreamTestCase
(at least to validate that startOffset does not exceed endOffset, but maybe some other checks too?)
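As a rough illustration of the kind of invariant such a test could enforce (a sketch only: the class and method names here are hypothetical, not the actual BaseTokenStreamTestCase API):

```java
// Hypothetical sketch of per-token offset sanity checks, in the spirit of
// what BaseTokenStreamTestCase could assert for every emitted token.
// Names and conditions are illustrative, not Lucene's actual API.
public class OffsetChecks {
    /**
     * Returns true if a token's offsets are sane with respect to the
     * original (pre-CharFilter) text length.
     */
    public static boolean offsetsAreValid(int startOffset, int endOffset,
                                          int originalTextLength) {
        return startOffset >= 0                  // no negative offsets
            && startOffset <= endOffset          // start must not pass end
            && endOffset <= originalTextLength;  // must not exceed the original text
    }
}
```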

If the highlighter hits this condition, it (rightfully) complains and throws an exception,
among other problems. So I think we need to improve this situation everywhere.

> NGramTokenFilter may generate offsets that exceed the length of original text
> -----------------------------------------------------------------------------
>                 Key: LUCENE-2909
>                 URL:
>             Project: Lucene - Java
>          Issue Type: Bug
>          Components: contrib/analyzers
>    Affects Versions: 2.9.4
>            Reporter: Shinya Kasatani
>            Assignee: Koji Sekiguchi
>            Priority: Minor
>         Attachments: TokenFilterOffset.patch
> When using NGramTokenFilter combined with CharFilters that lengthen the original text
(such as "ß" -> "ss"), the generated offsets exceed the length of the original text.
> This causes InvalidTokenOffsetsException when you try to highlight the text in Solr.
> While it is not possible to know the accurate offset of each character once you tokenize
the whole text with tokenizers like KeywordTokenizer, NGramTokenFilter should at least avoid
generating invalid offsets.
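To make the arithmetic concrete, here is a self-contained sketch of the failure mode (not Lucene code: the CharFilter mapping and the n-gram loop are simulated with plain strings):

```java
// Simulates the condition described above, without Lucene: a char filter
// maps "ß" -> "ss", lengthening the text, and an n-gram loop then computes
// offsets against the *filtered* text rather than the original one.
public class NGramOffsetDemo {
    /** Largest endOffset any bigram would report, measured on the filtered text. */
    public static int maxBigramEndOffset(String original) {
        String filtered = original.replace("ß", "ss"); // stand-in for a MappingCharFilter
        int maxEnd = 0;
        for (int i = 0; i + 2 <= filtered.length(); i++) {
            int endOffset = i + 2; // offset relative to the filtered text
            maxEnd = Math.max(maxEnd, endOffset);
        }
        return maxEnd;
    }
}
```

For "Straße" (length 6) the filtered text is "Strasse" (length 7), so the last bigram reports endOffset 7, past the end of the original text: exactly the condition that makes the highlighter throw InvalidTokenOffsetsException.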

This message is automatically generated by JIRA.
