lucene-dev mailing list archives

From "David Byrne (JIRA)" <>
Subject [jira] Commented: (LUCENE-2947) NGramTokenizer shouldn't trim whitespace
Date Thu, 03 Mar 2011 16:04:37 GMT


David Byrne commented on LUCENE-2947:

Yeah, I was originally planning to implement skip-grams as a separate tokenizer. Since we
are re-evaluating ngram tokenization in general, maybe I can come up with an elegant solution.
Support for positional ngrams is another thing to consider.
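For readers unfamiliar with the term: character-level skip-grams are like character n-grams, but the characters forming a gram may be separated by up to k skipped positions. A minimal sketch (the class and method names here are illustrative, not Lucene API):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of character-level skip-bigrams: pairs of characters
// separated by 0..maxSkip intervening characters. Not Lucene code.
public class SkipGrams {
    // Returns all two-character skip-grams of s with gaps of 0..maxSkip.
    static List<String> skipBigrams(String s, int maxSkip) {
        List<String> out = new ArrayList<>();
        for (int i = 0; i < s.length(); i++) {
            for (int skip = 0; skip <= maxSkip; skip++) {
                int j = i + 1 + skip;
                if (j < s.length()) {
                    out.add("" + s.charAt(i) + s.charAt(j));
                }
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // "abcd" with maxSkip=1 yields ab, ac, bc, bd, cd
        System.out.println(skipBigrams("abcd", 1));
    }
}
```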

> NGramTokenizer shouldn't trim whitespace
> ----------------------------------------
>                 Key: LUCENE-2947
>                 URL:
>             Project: Lucene - Java
>          Issue Type: Bug
>          Components: contrib/analyzers
>    Affects Versions: 3.0.3
>            Reporter: David Byrne
>            Priority: Minor
>         Attachments:
> Before I tokenize my strings, I am padding them with white space:
> String foobar = " " + foo + " " + bar + " ";
> When constructing term vectors from ngrams, this strategy has a couple of benefits.  First,
> it places special emphasis on the starting and ending of a word.  Second, it improves the
> similarity between phrases with swapped words.  " foo bar " matches " bar foo " more closely
> than "foo bar" matches "bar foo".
> The problem is that Lucene's NGramTokenizer trims whitespace.  This forces me to do some
> preprocessing on my strings before I can tokenize them:
> foobar.replaceAll(" ","$"); //arbitrary char not in my data
> This is undocumented, so users won't realize their strings are being trim()'ed, unless
> they look through the source, or examine the tokens manually.
> I am proposing NGramTokenizer should be changed to respect whitespace.  Is there a compelling
> reason against this?
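The padding effect described above is easy to verify with a toy similarity measure. The sketch below (illustrative names, not the NGramTokenizer implementation) compares Jaccard overlap of character trigrams with and without space padding; the padded variants of the swapped phrases share the word-boundary grams and score higher. As a side note, `String.replaceAll` treats `$` specially in its replacement argument, so a literal single-character substitution like the one quoted above is safer written as `replace(' ', '$')`.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Toy demonstration of why space-padding helps: padded " foo bar " and
// " bar foo " share boundary trigrams like " fo", "ar ", raising overlap.
public class PaddedNGrams {
    // All character n-grams of s, in order.
    static List<String> ngrams(String s, int n) {
        List<String> out = new ArrayList<>();
        for (int i = 0; i + n <= s.length(); i++) {
            out.add(s.substring(i, i + n));
        }
        return out;
    }

    // Jaccard similarity of the two gram sets: |A ∩ B| / |A ∪ B|.
    static double jaccard(List<String> a, List<String> b) {
        Set<String> inter = new HashSet<>(a);
        inter.retainAll(new HashSet<>(b));
        Set<String> union = new HashSet<>(a);
        union.addAll(b);
        return (double) inter.size() / union.size();
    }

    public static void main(String[] args) {
        double padded = jaccard(ngrams(" foo bar ", 3), ngrams(" bar foo ", 3));
        double plain  = jaccard(ngrams("foo bar", 3),  ngrams("bar foo", 3));
        System.out.println(padded + " vs " + plain); // prints 0.75 vs 0.25
    }
}
```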

This message is automatically generated by JIRA.

