lucene-dev mailing list archives

From "Adam Hiatt (JIRA)" <j...@apache.org>
Subject [jira] Commented: (LUCENE-759) Add n-gram tokenizers to contrib/analyzers
Date Fri, 16 Feb 2007 23:11:05 GMT

    [ https://issues.apache.org/jira/browse/LUCENE-759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12473851 ]

Adam Hiatt commented on LUCENE-759:
-----------------------------------

Otis: this really isn't a bug. The min/max gram code I added applies only to the EdgeNGramTokenizer. I only want to generate _edge_ n-grams within the range of sizes provided.

For example, with the EdgeNGramTokenizer:
  input: abcde
  minGram: 1
  maxGram: 3

'a ab abc' is in fact what I intended to produce.
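
For clarity, here is a minimal standalone Java sketch of that intended behavior (a hypothetical illustration only; it is not the patch's actual EdgeNGramTokenizer API or its Token-stream plumbing):

    import java.util.ArrayList;
    import java.util.List;

    public class EdgeNGramSketch {
        // Front-edge n-grams of sizes minGram..maxGram, clipped to the
        // input length.
        static List<String> edgeNGrams(String input, int minGram, int maxGram) {
            List<String> grams = new ArrayList<String>();
            int limit = Math.min(maxGram, input.length());
            for (int len = minGram; len <= limit; len++) {
                grams.add(input.substring(0, len));
            }
            return grams;
        }

        public static void main(String[] args) {
            // Prints [a, ab, abc] for the example above.
            System.out.println(edgeNGrams("abcde", 1, 3));
        }
    }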

I think the functionality you referred to makes more sense in NGramTokenizer.
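
By contrast, here is a sketch of what I'd expect NGramTokenizer to do: emit n-grams of every size in the range at every start position, not just at the leading edge (again a hypothetical illustration, not the actual class):

    import java.util.ArrayList;
    import java.util.List;

    public class NGramSketch {
        // All n-grams of sizes minGram..maxGram at every start offset.
        static List<String> nGrams(String input, int minGram, int maxGram) {
            List<String> grams = new ArrayList<String>();
            for (int len = minGram; len <= maxGram; len++) {
                for (int start = 0; start + len <= input.length(); start++) {
                    grams.add(input.substring(start, start + len));
                }
            }
            return grams;
        }

        public static void main(String[] args) {
            // For "abcde" and 1..3: a b c d e ab bc cd de abc bcd cde
            System.out.println(nGrams("abcde", 1, 3));
        }
    }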


> Add n-gram tokenizers to contrib/analyzers
> ------------------------------------------
>
>                 Key: LUCENE-759
>                 URL: https://issues.apache.org/jira/browse/LUCENE-759
>             Project: Lucene - Java
>          Issue Type: Improvement
>          Components: Analysis
>            Reporter: Otis Gospodnetic
>            Priority: Minor
>         Attachments: LUCENE-759.patch, LUCENE-759.patch
>
>
> It would be nice to have some n-gram-capable tokenizers in contrib/analyzers.  Patch coming shortly.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


---------------------------------------------------------------------
To unsubscribe, e-mail: java-dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: java-dev-help@lucene.apache.org

