lucene-dev mailing list archives

From "Thomas Peuss (JIRA)" <j...@apache.org>
Subject [jira] Commented: (LUCENE-1166) A tokenfilter to decompose compound words
Date Sun, 17 Feb 2008 15:26:36 GMT

    [ https://issues.apache.org/jira/browse/LUCENE-1166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12569708#action_12569708 ]

Thomas Peuss commented on LUCENE-1166:
--------------------------------------

bq. But I'm wondering if a similar approach could be used for, say, word segmentation in Chinese? That is, iterate through a string of Chinese characters, buffering them and looking up the buffered string in a Chinese dictionary. Once there is a dictionary match, and the addition of the following character results in a string that has no entry in the dictionary, that previous buffered string can be considered a word/token. I'm not sure if your patch does something like this, but if it does, I am wondering if it is general enough that what you did can be used as (the basis of) word segmentation for Chinese, and thus for a Chinese Analyzer that's not just a dumb n-gram Analyzer (which is what we have today).
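
As an illustration of the approach the quote describes, here is a minimal sketch. This is not code from the patch; the Set<String> dictionary is just a stand-in for a real Chinese lexicon:

{code:java}
import java.util.*;

public class GreedySegmenterSketch {
    /**
     * Minimal sketch of the quoted approach: buffer characters, and once
     * the buffer matches the dictionary while adding the next character
     * would not, emit the buffer as a token. Characters that start no
     * dictionary entry pass through as single-character tokens.
     */
    static List<String> segment(String text, Set<String> dict) {
        List<String> tokens = new ArrayList<String>();
        int pos = 0;
        while (pos < text.length()) {
            int match = -1;
            for (int end = pos + 1; end <= text.length(); end++) {
                if (dict.contains(text.substring(pos, end))) {
                    match = end;
                    // Stop once extending by one more character would
                    // leave the dictionary (or the input is exhausted).
                    if (end == text.length()
                            || !dict.contains(text.substring(pos, end + 1))) {
                        break;
                    }
                }
            }
            if (match > 0) {
                tokens.add(text.substring(pos, match)); // emit buffered word
                pos = match;
            } else {
                tokens.add(text.substring(pos, pos + 1)); // unknown character
                pos++;
            }
        }
        return tokens;
    }
}
{code}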

Currently the code adds a token to the stream when an n-gram of the current token in the token stream matches a word in the dictionary (I am only speaking about the DumbCompoundWordTokenFilter here, because I doubt that hyphenation patterns exist for Chinese). I don't know enough about the structure of Chinese to answer this question in detail. You can have a look at the test case in the patch to see how the filters work.
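
For readers who don't want to dig into the patch, the behaviour described above boils down to something like the following sketch. This is not the actual DumbCompoundWordTokenFilter code, and the method and parameter names are made up for illustration:

{code:java}
import java.util.*;

public class CompoundSketch {
    /**
     * Illustrative sketch only: adds every dictionary-matching substring
     * ("n-gram") of the token as an extra token, while keeping the
     * original token in the stream, as the filter described above does.
     */
    static List<String> decompose(String token, Set<String> dict,
                                  int minSubword, int maxSubword) {
        List<String> out = new ArrayList<String>();
        out.add(token); // the original token stays in the stream
        String lower = token.toLowerCase(Locale.ROOT);
        for (int start = 0; start < lower.length(); start++) {
            int maxLen = Math.min(maxSubword, lower.length() - start);
            for (int len = minSubword; len <= maxLen; len++) {
                String gram = lower.substring(start, start + len);
                if (dict.contains(gram)) {
                    out.add(gram); // dictionary hit becomes an extra token
                }
            }
        }
        return out;
    }
}
{code}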




> A tokenfilter to decompose compound words
> -----------------------------------------
>
>                 Key: LUCENE-1166
>                 URL: https://issues.apache.org/jira/browse/LUCENE-1166
>             Project: Lucene - Java
>          Issue Type: New Feature
>          Components: Analysis
>            Reporter: Thomas Peuss
>         Attachments: CompoundTokenFilter.patch, CompoundTokenFilter.patch, CompoundTokenFilter.patch, de.xml, hyphenation.dtd
>
>
> A tokenfilter to decompose compound words found in many Germanic languages (like German, Swedish, ...) into single tokens.
> An example: Donaudampfschiff would be decomposed to Donau, dampf, schiff so that you can find the word even when you only enter "Schiff".
> I use the hyphenation code from the Apache XML project FOP (http://xmlgraphics.apache.org/fop/) to do the first step of decomposition. Currently I use the FOP jars directly. I only use a handful of classes from the FOP project.
> My question now:
> Would it be OK to copy these classes over to the Lucene project (renaming the packages, of course), or should I stick with the dependency on the FOP jars? The FOP code uses the ASF V2 license as well.
> What do you think?
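
To make the quoted Donaudampfschiff example concrete, here is the hedged decompose(...) sketch from earlier in this mail run against a toy dictionary; the subword-size bounds of 4 and 15 are arbitrary illustration values:

{code:java}
import java.util.*;

public class DecomposeDemo {
    public static void main(String[] args) {
        // Toy dictionary covering the subwords from the example above.
        Set<String> dict = new HashSet<String>(
                Arrays.asList("donau", "dampf", "schiff"));
        List<String> tokens =
                CompoundSketch.decompose("Donaudampfschiff", dict, 4, 15);
        System.out.println(tokens);
        // Prints: [Donaudampfschiff, donau, dampf, schiff]
        // so a search for "Schiff" can now match the compound.
    }
}
{code}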

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



