lucene-dev mailing list archives

From Doug Cutting <>
Subject Re: Token declared final ?
Date Tue, 23 Mar 2004 17:11:36 GMT
The 'type' field of Token would be a good place for part-of-speech tags. 
Does that work for you?  If not, perhaps we should make Token non-final.
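To make that concrete, a TokenFilter along these lines could carry the tag through the analysis chain. This is only a sketch against the Lucene 1.x analysis API: since Token is final but its type is set at construction, the filter copies each token with the POS tag as its type. The `tagFor` method is a placeholder for whatever tagger the application supplies, not part of Lucene.

```java
import java.io.IOException;
import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;

// Sketch: store a part-of-speech tag in each Token's type field.
public class PosTypeFilter extends TokenFilter {

    public PosTypeFilter(TokenStream input) {
        super(input);
    }

    public Token next() throws IOException {
        Token t = input.next();
        if (t == null) {
            return null;
        }
        // Token is final, so instead of subclassing we rebuild the
        // token with the same text and offsets, using the POS tag
        // as its type.
        return new Token(t.termText(), t.startOffset(), t.endOffset(),
                         tagFor(t.termText()));
    }

    // Placeholder for the application's POS tagger (an assumption,
    // not a Lucene API).
    private String tagFor(String term) {
        return "UNKNOWN";
    }
}
```

An Analyzer would then wrap its Tokenizer with this filter, and the tag is available from Token.type() at index time.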

As has been discussed before, Lucene uses final for two reasons.  The 
first is historical: long ago it used to make things faster by 
permitting javac to inline things.  The second is that some classes are 
not designed to be subclassed, e.g., subclassing Field or Document will 
generally cause more confusion than it will simplify an application. 
The problem is sometimes determining which case is which.


Thimal Jayasooriya wrote:
> Hi all:
>     I have a question about the class structure of Tokens and 
> Tokenizers. Apologies, it's a bit long-winded :)
>    As part of my Masters research, I'm trying to use Lucene to store 
> different semantic classes found within documents. For this, I need to 
> first split sentences and then generate part of speech (POS) information 
> for each significant word found within a particular document. Through 
> separate libraries, I've already done the splitting and tagging tasks.
>    When I looked at the source for Token 
> (org.apache.lucene.analysis.Token), however, I found that it has been 
> declared final. I had intended to subclass Token to also keep a POS 
> marker and use it later within the Analyzer. Could someone please give 
> me some information on why Token was declared final? I am sure I've 
> missed something, but I can't see what it is. Alternately, does it 
> make more sense to store the POS information elsewhere? I would 
> probably need it at index time only.
>     My original intention was to extend the Tokenizer 
> (org.apache.lucene.analysis.Tokenizer), get POS information, add it to 
> the token and then do the normal consumption of punctuation and so on 
> with JavaCC. Punctuation is necessary to recognize some named entities, 
> so I need to do this before those tokens are consumed. Is there a better 
> or more logical place to perform POS tagging?
> Thanks,
> Thimal
