lucene-dev mailing list archives

From "Michael McCandless (Commented) (JIRA)" <>
Subject [jira] [Commented] (LUCENE-3897) KuromojiTokenizer fails with large docs
Date Wed, 21 Mar 2012 16:11:51 GMT


Michael McCandless commented on LUCENE-3897:

I think the problem is when we force a backtrace (if it has been >= 1024 chars since the last
backtrace)... I think we are not correctly pruning all paths in this case.

Unlike the natural backtrace, which happens whenever there is only 1 path (ie the parsing
is unambiguous from that point backwards), the forced backtrace may have more than one live path.

Have to mull how to fix...
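To illustrate the distinction above, here is a minimal standalone sketch (this is NOT the actual KuromojiTokenizer code; the class, the Path type, and the method are hypothetical simplifications): a natural backtrace is only legal when exactly one path is live, while a forced backtrace fires at the 1024-char cap with possibly several live paths, so it must explicitly prune every path except the cheapest one.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch, not Lucene code: shows why a forced backtrace
// must prune ALL live paths down to one, unlike a natural backtrace
// which fires only when a single path survives anyway.
public class ForcedBacktraceSketch {

    // One partial segmentation hypothesis ending at some position.
    static class Path {
        final int endPos;
        final int cost;
        Path(int endPos, int cost) { this.endPos = endPos; this.cost = cost; }
    }

    /**
     * forced == false models the natural backtrace: it requires exactly one
     * live path. forced == true models hitting the 1024-char cap: several
     * paths may still be live, and every one except the cheapest must be
     * discarded, or later bookkeeping (e.g. backPos vs lastBackTracePos)
     * becomes inconsistent.
     */
    static Path backtrace(List<Path> live, boolean forced) {
        if (!forced && live.size() != 1) {
            throw new IllegalStateException("natural backtrace needs exactly one live path");
        }
        Path best = live.get(0);
        for (Path p : live) {
            if (p.cost < best.cost) best = p;
        }
        live.clear();   // prune every live path...
        live.add(best); // ...keeping only the winner
        return best;
    }

    public static void main(String[] args) {
        List<Path> live = new ArrayList<>();
        live.add(new Path(5120, 10));
        live.add(new Path(4100, 7)); // ambiguous: two paths are still live

        // Forced backtrace: allowed (and required) to resolve the ambiguity.
        Path best = backtrace(live, true);
        System.out.println("kept endPos=" + best.endPos + " live=" + live.size());
    }
}
```

The numbers 4100 and 5120 mirror the positions in the assertion failure quoted below purely for flavor; the point is only that the forced case starts with more than one live path and must end with exactly one.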
> KuromojiTokenizer fails with large docs
> ---------------------------------------
>                 Key: LUCENE-3897
>                 URL:
>             Project: Lucene - Java
>          Issue Type: Bug
>          Components: modules/analysis
>            Reporter: Robert Muir
>             Fix For: 3.6, 4.0
> Just shoving largish random docs triggers asserts like:
> {noformat}
>     [junit] Caused by: java.lang.AssertionError: backPos=4100 vs lastBackTracePos=5120
>     [junit] 	at org.apache.lucene.analysis.kuromoji.KuromojiTokenizer.backtrace(
>     [junit] 	at org.apache.lucene.analysis.kuromoji.KuromojiTokenizer.parse(
>     [junit] 	at org.apache.lucene.analysis.kuromoji.KuromojiTokenizer.incrementToken(
>     [junit] 	at org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(
> {noformat}
> But, you get no seed...
> I'll commit the test case and @Ignore it.

This message is automatically generated by JIRA.