lucene-dev mailing list archives

From "Uwe Schindler (Commented) (JIRA)" <>
Subject [jira] [Commented] (SOLR-2917) Support for field-specific tokenizers, token- and character filters in search results clustering
Date Fri, 25 Nov 2011 09:29:40 GMT


Uwe Schindler commented on SOLR-2917:

bq. On the other hand, the schema could define a parallel field with certain filters disabled,
clustering should work nicely with such a stream.

That was the idea behind the suggestion. The Highlighter works a little differently, so it does
not need this: it uses the TermVectors only to find the highlighting offsets, but marks
the highlights in the original text (from a stored field). It simply avoids reanalyzing,
which can be expensive if you e.g. use BASIS or other heavy analysis.
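A minimal sketch of the offset-based approach the comment describes (a hypothetical helper, not the actual Highlighter API): the offsets would come from term vectors, and markup is applied directly to the stored text, so the document never has to be reanalyzed.

```java
// Hypothetical sketch (not the real Highlighter API): term vectors supply
// character offsets, and the markup is applied to the stored original text,
// so no re-analysis of the document is needed.
public class Highlight {
    static String highlight(String storedText, int start, int end) {
        // [start, end) are character offsets of the matched term
        return storedText.substring(0, start)
             + "<em>" + storedText.substring(start, end) + "</em>"
             + storedText.substring(end);
    }

    public static void main(String[] args) {
        // "Lucene" occupies offsets [15, 21) in the stored field value
        System.out.println(highlight("Development of Lucene", 15, 21));
    }
}
```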
> Support for field-specific tokenizers, token- and character filters in search results clustering
> ------------------------------------------------------------------------------------------------
>                 Key: SOLR-2917
>                 URL:
>             Project: Solr
>          Issue Type: Improvement
>          Components: contrib - Clustering
>            Reporter: Stanislaw Osinski
>            Assignee: Stanislaw Osinski
>             Fix For: 3.6
> Currently, the Carrot2 search results clustering component creates clusters based on the
raw text of a field. The reason for this is that Carrot2 aims to create meaningful cluster
labels by using sequences of words taken directly from the documents' text (including stop
words: _Development of Lucene and Solr_ is more readable than _Development Lucene Solr_).
The easiest way of providing input for such a process was feeding Carrot2 with raw (stored)
document content.
> It is, however, possible to take into account +some+ of the field's filters during clustering.
Because Carrot2 does not currently expose an API for feeding pre-tokenized input, the clustering
component would need to: 
> 1. get raw text of the field, 
> 2. run it through the field's char filters, tokenizers and selected token filters (omitting
e.g. stop words filter and stemmers, Carrot2 needs the original words to produce readable
cluster labels), 
> 3. glue the output back into a string and feed to Carrot2 for clustering. 
> In the future, to eliminate step 3, we could modify Carrot2 to accept pre-tokenized content.
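The three steps above could be sketched as follows. This is a Lucene-free illustration under stated assumptions: a tag-stripping regex stands in for the field's char filters, whitespace splitting for its tokenizer, and lowercasing for the "selected" token filters, while stop-word removal and stemming are deliberately omitted so Carrot2 sees the original words.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical sketch of the three-step pipeline described in the issue;
// the filter implementations are stand-ins, not the real analyzer chain.
public class ClusteringInput {
    // Step 2a: char filter -- modeled here as a simple HTML tag remover
    static String charFilter(String raw) {
        return raw.replaceAll("<[^>]*>", " ");
    }

    // Step 2b: tokenizer -- whitespace split stands in for the field's tokenizer
    static List<String> tokenize(String text) {
        return Arrays.stream(text.trim().split("\\s+"))
                     .filter(t -> !t.isEmpty())
                     .collect(Collectors.toList());
    }

    // Step 2c: selected token filters only -- lowercasing is kept, while
    // stop-word removal and stemming are omitted so Carrot2 can build
    // readable cluster labels from the original words
    static List<String> selectedFilters(List<String> tokens) {
        return tokens.stream()
                     .map(String::toLowerCase)
                     .collect(Collectors.toList());
    }

    // Step 1 + 3: take the raw field text, run the chain, glue the tokens
    // back into a single string to feed to Carrot2
    static String prepareForClustering(String rawFieldText) {
        return String.join(" ", selectedFilters(tokenize(charFilter(rawFieldText))));
    }

    public static void main(String[] args) {
        // note how "of" and "and" survive, unlike in a stop-filtered stream
        System.out.println(prepareForClustering("<b>Development</b> of Lucene and Solr"));
    }
}
```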

This message is automatically generated by JIRA.


