lucene-dev mailing list archives

From "David Bowen (JIRA)" <j...@apache.org>
Subject [jira] Commented: (LUCENE-1489) highlighter problem with n-gram tokens
Date Fri, 02 Oct 2009 01:05:23 GMT

    [ https://issues.apache.org/jira/browse/LUCENE-1489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12761441#action_12761441 ]

David Bowen commented on LUCENE-1489:
-------------------------------------

By the way, here is the output from Chris's test program with this patch:
{code}
Testing analyzer Bigram shingle analyzer (bigrams and unigrams)...
---------------------------------
<B>Lucene</B> can index and can search [query='Lucene']
Lucene <B>can</B> make an index [query='can']
Lucene <B>can</B> index and <B>can</B> search [query='can']
Lucene <B>can</B> index <B>can</B> search and <B>can</B>
highlight [query='can']
Lucene can <B>index</B> can <B>search</B> and can highlight [query='+index
+search']

Testing analyzer Bigram (non-shingle) analyzer (bigrams only)...
---------------------------------
<B>Lucene</B> can index and can search [query='Lucene']
Lucene <B>can</B> make an index [query='can']
Lucene <B>can</B> index and <B>can</B> search [query='can']
Lucene <B>can</B> index <B>can</B> search and <B>can</B>
highlight [query='can']
Lucene can <B>index</B> can <B>search</B> and can highlight [query='+index
+search']
{code}
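For anyone following along, the tokenization that triggers the problem can be sketched without Lucene at all: NGramTokenizer(min=2, max=2) emits every adjacent character pair, and consecutive bigrams have overlapping character offsets. A minimal plain-Java sketch (the class and method names are mine for illustration, not Lucene API):

```java
// A minimal sketch (no Lucene dependency) of what NGramTokenizer(min=2, max=2)
// emits: character bigrams annotated with their [start, end) offsets.
import java.util.ArrayList;
import java.util.List;

public class BigramSketch {
    // Return each bigram of the input with its start/end character offsets.
    static List<String> bigrams(String text) {
        List<String> out = new ArrayList<String>();
        for (int i = 0; i + 2 <= text.length(); i++) {
            out.add(text.substring(i, i + 2) + "@" + i + "-" + (i + 2));
        }
        return out;
    }

    public static void main(String[] args) {
        // The query "can" is itself bigrammed into "ca" and "an", whose
        // offsets overlap -- that overlap is what the highlighter must handle.
        System.out.println(bigrams("can"));   // prints [ca@0-2, an@1-3]
    }
}
```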



> highlighter problem with n-gram tokens
> --------------------------------------
>
>                 Key: LUCENE-1489
>                 URL: https://issues.apache.org/jira/browse/LUCENE-1489
>             Project: Lucene - Java
>          Issue Type: Bug
>          Components: contrib/highlighter
>            Reporter: Koji Sekiguchi
>            Priority: Minor
>         Attachments: lucene1489.patch
>
>
> I have a problem when using n-grams and the highlighter. I thought it had been solved in LUCENE-627...
> Actually, I found this problem while using CJKTokenizer on Solr; here is a Lucene program
> that reproduces it using NGramTokenizer(min=2,max=2) instead of CJKTokenizer:
> {code:java}
> import java.io.Reader;
> import org.apache.lucene.analysis.Analyzer;
> import org.apache.lucene.analysis.TokenStream;
> import org.apache.lucene.analysis.ngram.NGramTokenizer;
> import org.apache.lucene.queryParser.QueryParser;
> import org.apache.lucene.search.Query;
> import org.apache.lucene.search.highlight.Highlighter;
> import org.apache.lucene.search.highlight.QueryScorer;
> public class TestNGramHighlighter {
>   public static void main(String[] args) throws Exception {
>     Analyzer analyzer = new NGramAnalyzer();
>     final String TEXT = "Lucene can make index. Then Lucene can search.";
>     final String QUERY = "can";
>     QueryParser parser = new QueryParser("f",analyzer);
>     Query query = parser.parse(QUERY);
>     QueryScorer scorer = new QueryScorer(query,"f");
>     Highlighter h = new Highlighter( scorer );
>     System.out.println( h.getBestFragment(analyzer, "f", TEXT) );
>   }
>   static class NGramAnalyzer extends Analyzer {
>     public TokenStream tokenStream(String field, Reader input) {
>       return new NGramTokenizer(input,2,2);
>     }
>   }
> }
> {code}
> expected output is:
> Lucene <B>can</B> make index. Then Lucene <B>can</B> search.
> but the actual output is:
> Lucene <B>can make index. Then Lucene can</B> search.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


---------------------------------------------------------------------
To unsubscribe, e-mail: java-dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: java-dev-help@lucene.apache.org

