lucene-dev mailing list archives

From "Nattapong Sirilappanich (JIRA)" <>
Subject [jira] [Commented] (LUCENE-4253) ThaiAnalyzer fail to tokenize word.
Date Fri, 27 Jul 2012 05:31:33 GMT


Nattapong Sirilappanich commented on LUCENE-4253:

I see your point.
However, it is harder than it looks.
Correct me if I'm wrong.

As stated in the thesis itself:
This makes retrieval and proper recognition of the documents which contain the phrase "SOME
THAI PHRASE" almost impossible.

This is because Thai text may construct a word out of several of the stop words in that list. Without a better
tokenizer, such a word will disappear from the index entirely.

I have not had a chance to read the thesis that researched those stop words. In my own opinion,
the only set of words that should not cause truncation is the set of conjunctions.
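To illustrate the concern above, here is a minimal self-contained sketch (not Lucene's actual StopFilter, and the token strings are hypothetical placeholders, since romanized examples stand in for real Thai tokens): once the segmenter emits a token that happens to match a stop-list entry, that token never reaches the index, even if it was a content-bearing word.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class StopFilterSketch {
    // Drop every token that appears in the stop set, as a stop filter would.
    static List<String> filter(List<String> tokens, Set<String> stopWords) {
        return tokens.stream()
                     .filter(t -> !stopWords.contains(t))
                     .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Hypothetical stop list: "thi" is a function word in isolation,
        // but segmentation may also produce it as (part of) a content word.
        Set<String> stop = new HashSet<>(Arrays.asList("thi", "nai"));
        List<String> segmented = Arrays.asList("bangkok", "thi", "lucene");
        System.out.println(filter(segmented, stop)); // [bangkok, lucene] -- "thi" is lost
    }
}
```

A query containing the dropped token can then never match the document, which is the retrieval failure the thesis quote describes.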
> ThaiAnalyzer fail to tokenize word.
> -----------------------------------
>                 Key: LUCENE-4253
>                 URL:
>             Project: Lucene - Core
>          Issue Type: Bug
>          Components: modules/analysis
>    Affects Versions: Realtime Branch
>         Environment: Windows 7 SP1.
> Java 1.7.0-b147
>            Reporter: Nattapong Sirilappanich
> The method
> protected TokenStreamComponents createComponents(String, Reader)
> returns a component that is unable to tokenize Thai words.
> The current return statement is:
> return new TokenStreamComponents(source, new StopFilter(matchVersion, result,
> In my experiment I changed the return statement to:
> return new TokenStreamComponents(source, result);
> It gives me the correct result.
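The two return statements quoted in the issue can be contrasted with a self-contained sketch (stub methods stand in for the real Lucene pipeline, and the token strings are hypothetical): the current pipeline runs the segmenter output through a stop filter, while the reporter's experiment returns the segmenter output directly.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class ThaiPipelineSketch {
    // Stand-in for the segmenter stage: tokens are assumed already split.
    static List<String> segment(List<String> tokens) {
        return tokens;
    }

    // Stand-in for StopFilter: remove tokens found in the stop set.
    static List<String> stopFilter(List<String> tokens, Set<String> stop) {
        return tokens.stream().filter(t -> !stop.contains(t))
                     .collect(Collectors.toList());
    }

    // Current behaviour: the stop filter wraps the segmenter output.
    static List<String> currentComponents(List<String> in, Set<String> stop) {
        return stopFilter(segment(in), stop);
    }

    // Reporter's experiment: return the segmenter output unfiltered.
    static List<String> experimentalComponents(List<String> in) {
        return segment(in);
    }

    public static void main(String[] args) {
        Set<String> stop = new HashSet<>(Collections.singletonList("thi"));
        List<String> tokens = Arrays.asList("thi", "lucene");
        System.out.println(currentComponents(tokens, stop));  // [lucene]
        System.out.println(experimentalComponents(tokens));   // [thi, lucene]
    }
}
```

The experiment keeps every segmented token, which explains why it "gives the correct result" for words that collide with the stop list, at the cost of also indexing genuine stop words.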

This message is automatically generated by JIRA.