lucene-java-user mailing list archives

From Shay Hummel <>
Subject Re: Text dependent analyzer
Date Wed, 15 Apr 2015 12:12:04 GMT
Hi Ahmet,
Thank you for the reply.
That's exactly what I am doing. At the moment, to index a document, I break
it into sentences, and each sentence is analyzed separately (lemmatization,
stopword removal, etc.).
Now, what I am looking for is a way to create an analyzer (a class which
extends Lucene's Analyzer). This analyzer will be used for both indexing and
query processing. Like the EnglishAnalyzer, it will receive the text and
produce tokens.
The Analyzer API requires implementing createComponents, which does not
depend on the text being analyzed (it receives only the field name, not the
text). This is problematic because, as you know, OpenNLP's sentence breaking
depends on the text it gets (OpenNLP uses its model files to find the span
of each sentence and then break the text accordingly).
Is there a way around it?
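One common way around this, sketched below with JDK classes only: since createComponents never sees the text, the text-dependent work has to happen inside the tokenizer, which does see the full input. A custom Tokenizer can buffer the entire input in reset(), run sentence detection over the buffer, and then emit tokens sentence by sentence. The sketch stands in java.text.BreakIterator for OpenNLP's sentence model and a trivial lowercase/split step for the real lemmatize/stopword/stem chain; the class and method names are illustrative, not Lucene API.

```java
import java.text.BreakIterator;
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

// Illustrative sketch: buffer the whole text first (as a custom Lucene
// Tokenizer would do in reset()), detect sentence spans over the full
// buffer, then analyze each sentence independently.
public class SentenceAwareSketch {

    // Stand-in for OpenNLP's sentence detector: the JDK's BreakIterator.
    static List<String> detectSentences(String text) {
        BreakIterator it = BreakIterator.getSentenceInstance(Locale.ENGLISH);
        it.setText(text);
        List<String> sentences = new ArrayList<>();
        int start = it.first();
        for (int end = it.next(); end != BreakIterator.DONE;
                start = end, end = it.next()) {
            String s = text.substring(start, end).trim();
            if (!s.isEmpty()) sentences.add(s);
        }
        return sentences;
    }

    // Stand-in for the per-sentence chain (lemmatizing, stopword removal,
    // stemming): here just lowercasing and splitting on non-word characters.
    static List<String> analyzeSentence(String sentence) {
        List<String> tokens = new ArrayList<>();
        for (String t : sentence.toLowerCase(Locale.ENGLISH).split("\\W+")) {
            if (!t.isEmpty()) tokens.add(t);
        }
        return tokens;
    }

    public static void main(String[] args) {
        String text = "Lucene is a search library. It analyzes text.";
        for (String sentence : detectSentences(text)) {
            System.out.println(analyzeSentence(sentence));
        }
        // prints:
        // [lucene, is, a, search, library]
        // [it, analyzes, text]
    }
}
```

In a real Lucene Analyzer the same buffering would live in the Tokenizer's reset(), which has access to the input Reader, so createComponents itself never needs to see the text.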


On Wed, Apr 15, 2015 at 3:50 AM Ahmet Arslan <> wrote:

> Hi Hummel,
> You can perform sentence detection outside of Solr, using OpenNLP for
> instance, and then feed the sentences to Solr.
> Ahmet
> On Tuesday, April 14, 2015 8:12 PM, Shay Hummel <>
> wrote:
> Hi
> I would like to create a text dependent analyzer.
> That is, *given a string*, the analyzer will:
> 1. Read the entire text and break it into sentences.
> 2. Tokenize each sentence, remove possessives, lowercase, mark terms, and
> stem.
> The second part is essentially what the EnglishAnalyzer does (in
> createComponents). However, that part does not depend on the text it
> receives, and making it text-dependent is the first part of what I am
> trying to do.
> So ... how can this be achieved?
> Thank you,
> Shay Hummel
> ---------------------------------------------------------------------
> To unsubscribe, e-mail:
> For additional commands, e-mail:
