lucene-dev mailing list archives

From "David Byrne (JIRA)" <>
Subject [jira] Commented: (LUCENE-2947) NGramTokenizer shouldn't trim whitespace
Date Mon, 14 Mar 2011 13:13:29 GMT


David Byrne commented on LUCENE-2947:

Has anybody had a chance to take a look at this patch?

Here's my real-world example of this patch in action:

> NGramTokenizer shouldn't trim whitespace
> ----------------------------------------
>                 Key: LUCENE-2947
>                 URL:
>             Project: Lucene - Java
>          Issue Type: Bug
>          Components: contrib/analyzers
>    Affects Versions: 3.0.3
>            Reporter: David Byrne
>            Priority: Minor
>         Attachments: LUCENE-2947.patch,
> Before I tokenize my strings, I am padding them with whitespace:
> String foobar = " " + foo + " " + bar + " ";
> When constructing term vectors from ngrams, this strategy has a couple of benefits.
> First, it places special emphasis on the start and end of a word. Second, it improves
> the similarity between phrases with swapped words: " foo bar " matches " bar foo "
> more closely than "foo bar" matches "bar foo".
> The problem is that Lucene's NGramTokenizer trims whitespace. This forces me to do some
> preprocessing on my strings before I can tokenize them:
> foobar.replaceAll(" ","$"); //arbitrary char not in my data
> This is undocumented, so users won't realize their strings are being trim()'ed unless
> they look through the source or examine the tokens manually.
> I am proposing that NGramTokenizer should be changed to respect whitespace. Is there a
> compelling reason against this?
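For illustration, here is a minimal, self-contained sketch of the padding strategy and the whitespace workaround described in the quoted issue. It assumes the 3.0.x contrib NGramTokenizer API (TermAttribute-based consumption, bigram settings); the class name, the sample values of foo and bar, and the choice of '$' as the padding stand-in are placeholders, not part of the patch:

    import java.io.StringReader;

    import org.apache.lucene.analysis.ngram.NGramTokenizer;
    import org.apache.lucene.analysis.tokenattributes.TermAttribute;

    public class PaddedNGramSketch {
        public static void main(String[] args) throws Exception {
            String foo = "foo";
            String bar = "bar";

            // Pad with spaces so word boundaries contribute their own n-grams.
            String foobar = " " + foo + " " + bar + " ";

            // Current workaround: swap the padding for a character the tokenizer
            // will not trim (the char-based replace is the same idea as the
            // replaceAll(...) in the issue description; '$' is assumed absent
            // from the data).
            String escaped = foobar.replace(' ', '$');

            // Emit bigrams (minGram = maxGram = 2) from the escaped string.
            NGramTokenizer tokenizer = new NGramTokenizer(new StringReader(escaped), 2, 2);
            TermAttribute term = tokenizer.addAttribute(TermAttribute.class);
            while (tokenizer.incrementToken()) {
                System.out.println(term.term());
            }
            tokenizer.close();
        }
    }

If the proposed change is applied, the intermediate replace step should become unnecessary, since the padding spaces themselves would survive into the n-grams.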

