lucene-java-user mailing list archives

From Erik Hatcher <>
Subject Re: Did you mean...
Date Tue, 17 Feb 2004 14:18:53 GMT
On Feb 17, 2004, at 6:53 AM, wrote:
> On Monday 16 February 2004 20:56, Erik Hatcher wrote:
>> On Feb 16, 2004, at 9:50 AM, wrote:
>>> TokenStream in = new WhitespaceAnalyzer().tokenStream("contents", new
>>> StringReader(doc.getField("contents").stringValue()));
>> The field is the field name.  No built-in analyzers use it, but custom
>> analyzers could key off of it to do field-specific analysis.  Look at
> If I want to tokenize all Fields I would have to get a tokenStream of
> each Field separately and process them separately? Or can I get one
> "master stream" that compounds all Fields?

You would do them separately.  I'm not clear on what you are trying to 
do, though.  The Analyzer does all this automatically for you during 
indexing, so it sounds like you are just trying to emulate what an 
Analyzer already does to extract words from text?
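If the goal really is just pulling words out of each field's text, a minimal sketch of doing that per field might look like the following. (Plain Java, no Lucene classes; the document map, field names, and the tokenize() helper are hypothetical stand-ins that mimic what WhitespaceAnalyzer does, one field at a time.)

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class FieldTokenizerSketch {

    // Split a field's text on runs of whitespace, roughly what
    // Lucene's WhitespaceAnalyzer does for a single field.
    static List<String> tokenize(String text) {
        List<String> tokens = new ArrayList<>();
        for (String t : text.split("\\s+")) {
            if (!t.isEmpty()) {
                tokens.add(t);
            }
        }
        return tokens;
    }

    public static void main(String[] args) {
        // Hypothetical document: each field is tokenized on its own;
        // there is no single "master stream" spanning all fields.
        Map<String, String> doc = new LinkedHashMap<>();
        doc.put("title", "Did you mean");
        doc.put("contents", "spell checking with Lucene");

        for (Map.Entry<String, String> field : doc.entrySet()) {
            System.out.println(field.getKey() + " -> "
                    + tokenize(field.getValue()));
        }
    }
}
```

The loop over the map mirrors what you would do with real Lucene fields: fetch each field's text, open a stream for it, and consume that stream to completion before moving to the next field.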

