lucene-dev mailing list archives

From "Otis Gospodnetic (JIRA)" <>
Subject [jira] Reopened: (LUCENE-759) Add n-gram tokenizers to contrib/analyzers
Date Fri, 16 Feb 2007 20:02:06 GMT


Otis Gospodnetic reopened LUCENE-759:

    Lucene Fields: [New, Patch Available]  (was: [New])

Reopening, because I'm bringing in Adam Hiatt's modifications that he uploaded in a patch
for SOLR-81.  Adam's changes allow this tokenizer to create n-grams whose sizes are specified
as a min-max range.

This patch fixes a bug in Adam's code, but it has another bug that I don't yet know how to fix.
Adam's bug:
  input: abcde
  minGram: 1
  maxGram: 3
  output: a ab abc  -- tokenizing stopped here, which was wrong; it should have continued:
b bc bcd c cd cde d de e

Otis' bug:
  input: abcde
  minGram: 1
  maxGram: 3
  output: e de cde d cd bcd c bc abc b ab -- tokenizing stops here, which is wrong; it
should generate one more n-gram: a

This bug won't hurt SOLR-81, but it should be fixed.
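For reference, here is a minimal standalone sketch (not the contrib tokenizer itself, just
plain Java) of the expected behavior: every substring of length minGram through maxGram,
scanned from each start position, so neither the trailing grams (Adam's bug) nor the
leading unigram (Otis' bug) are dropped.

    import java.util.ArrayList;
    import java.util.List;

    public class NGramSketch {
        // Emit every substring of length minGram..maxGram. Advancing the start
        // position in the outer loop ensures no position is skipped at either end.
        static List<String> ngrams(String input, int minGram, int maxGram) {
            List<String> grams = new ArrayList<String>();
            for (int start = 0; start < input.length(); start++) {
                for (int len = minGram;
                     len <= maxGram && start + len <= input.length();
                     len++) {
                    grams.add(input.substring(start, start + len));
                }
            }
            return grams;
        }

        public static void main(String[] args) {
            // Prints: [a, ab, abc, b, bc, bcd, c, cd, cde, d, de, e]
            System.out.println(ngrams("abcde", 1, 3));
        }
    }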

> Add n-gram tokenizers to contrib/analyzers
> ------------------------------------------
>                 Key: LUCENE-759
>                 URL:
>             Project: Lucene - Java
>          Issue Type: Improvement
>          Components: Analysis
>            Reporter: Otis Gospodnetic
>            Priority: Minor
>         Attachments: LUCENE-759.patch
> It would be nice to have some n-gram-capable tokenizers in contrib/analyzers.  Patch
coming shortly.

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.

