lucene-dev mailing list archives

From Grant Ingersoll <>
Subject Re: Tokenfilter to decompose compound words
Date Tue, 05 Feb 2008 21:30:23 GMT
Sounds interesting, but is there a way to extract out the FOP code?
Alternatively, this could live in the contrib area, where the dependency
would be allowed, although I still think we aim to keep our Analyzers
pretty lightweight. Of course, if they are ASL, we could probably just
put the classes in the contrib area with a NOTICE file saying they
were appropriated from FOP (although I don't know if even that is
necessary).

Best thing to do, I think, would be to submit a patch (see the Wiki on
How To Contribute) and we can have a discussion based on what is there.
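The decomposition Thomas describes below can be sketched as a greedy dictionary split. This is a hypothetical, simplified illustration (longest-match against a plain word list), not the FOP hyphenation approach the patch actually uses; the `CompoundSplitter` class and its dictionary are invented for the example:

```java
import java.util.*;

public class CompoundSplitter {
    // Hypothetical dictionary-based decomposer. The real filter uses
    // FOP's hyphenation code to find candidate split points instead of
    // a flat word list.
    public static List<String> decompose(String word, Set<String> dict) {
        List<String> parts = new ArrayList<>();
        if (split(word.toLowerCase(Locale.GERMAN), 0, dict, parts)) {
            return parts;
        }
        // No full decomposition found: keep the original token.
        return Collections.singletonList(word);
    }

    // Recursive longest-match-first search for a cover of w[start..].
    private static boolean split(String w, int start, Set<String> dict,
                                 List<String> out) {
        if (start == w.length()) return true;
        for (int end = w.length(); end > start; end--) {
            String candidate = w.substring(start, end);
            if (dict.contains(candidate)) {
                out.add(candidate);
                if (split(w, end, dict, out)) return true;
                out.remove(out.size() - 1); // backtrack
            }
        }
        return false;
    }
}
```

With a dictionary containing donau, dampf, and schiff, decomposing "Donaudampfschiff" yields [donau, dampf, schiff], so a search for "Schiff" alone can match. In a real Lucene TokenFilter each part would be emitted as its own token rather than returned as a list.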


On Feb 5, 2008, at 4:08 PM, Thomas Peuss wrote:

> Hello!
> I have nearly finished a tokenfilter to decompose compound words found
> in many Germanic languages (like German, Swedish, ...) into single
> tokens.
> An example: Donaudampfschiff would be decomposed to Donau, dampf,
> schiff, so that you can find the word even when you only enter
> "Schiff".
> I use the hyphenation code from the Apache XML project FOP to do the
> first step of decomposition. Currently I use the FOP jars directly. I
> only use a handful of classes from the FOP project.
> My question now:
> Would it be OK to copy these classes over to the Lucene project
> (renaming the packages, of course), or should I stick with the
> dependency on the FOP jars? The FOP code uses the ASF V2 license as
> well.
> What do you think?
> CU
> Thomas