lucene-java-user mailing list archives

From Yusuf Aaji <>
Subject Regarding ArabicLetterTokenizer and the StandardTokenizer - best of both worlds!
Date Fri, 20 Feb 2009 11:22:27 GMT
Hi Everyone,

My question is about the Arabic analysis package under:

It is cool and does a great job, but it uses a special tokenizer:

The problem with this tokenizer is that it fails to handle emails, URLs,
and acronyms the way the StandardTokenizer does.

The problem with the StandardTokenizer, on the other hand, is that it fails
to handle Arabic diacritics correctly, so it splits words that should not be split.

The Arabic diacritics are (as listed in the class):

FATHATAN = '\u064B';
DAMMATAN = '\u064C';
KASRATAN = '\u064D';
FATHA = '\u064E';
DAMMA = '\u064F';
KASRA = '\u0650';
SHADDA = '\u0651';
SUKUN = '\u0652';

So the range is [\u064B-\u0652].
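To see why this range matters, here is a minimal plain-Java sketch (not Lucene code): a tokenizer that only accepts Character.isLetter() breaks a diacritized Arabic word apart, because the diacritics in [\u064B-\u0652] are combining marks, not letters. Widening the letter test with that range keeps the word in one piece. The class and method names are made up for illustration:

```java
import java.util.ArrayList;
import java.util.List;

public class DiacriticSplitDemo {

    // True if c is one of the eight Arabic diacritics listed above.
    static boolean isArabicDiacritic(char c) {
        return c >= '\u064B' && c <= '\u0652';
    }

    // Minimal letter-run tokenizer; keepDiacritics widens the letter test.
    static List<String> tokenize(String text, boolean keepDiacritics) {
        List<String> tokens = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        for (char c : text.toCharArray()) {
            boolean isTokenChar = Character.isLetter(c)
                    || (keepDiacritics && isArabicDiacritic(c));
            if (isTokenChar) {
                current.append(c);
            } else if (current.length() > 0) {
                tokens.add(current.toString());
                current.setLength(0);
            }
        }
        if (current.length() > 0) tokens.add(current.toString());
        return tokens;
    }

    public static void main(String[] args) {
        // KAF FATHA TEH FATHA BEH FATHA (a fully vocalized word)
        String word = "\u0643\u064E\u062A\u064E\u0628\u064E";
        System.out.println(tokenize(word, false).size()); // 3: split at each diacritic
        System.out.println(tokenize(word, true).size());  // 1: kept whole
    }
}
```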

Is it possible to modify the StandardTokenizerImpl to consider these
diacritics normal letters?

I guess it should be done the same way it is done for Chinese and
Japanese, in this line in the file StandardTokenizerImpl.jflex:

// Chinese and Japanese (but NOT Korean, which is included in [:letter:])

CJ         = 

so it can be something like:

AR = [\u064B-\u0652]

then also modify this line to include the new group of characters:

// From the JFlex manual: "the expression that matches everything of <a> 
not matched by <b> is !(!<a>|<b>)"
LETTER     = !(![:letter:]|{CJ}|{AR})
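The JFlex identity quoted above can be checked with plain Java predicates: !(!a|b) matches "everything in a that is not in b", i.e. set difference. The sketch below is a stand-in, not Lucene's actual grammar, and the CJ range is simplified to the main CJK ideograph block for illustration. One caveat worth noting: since the diacritics are combining marks and not in [:letter:] to begin with, subtracting them from LETTER may be a no-op, and they might instead need to be added to the word rule (e.g. {LETTER}|{AR}):

```java
public class JflexSetDemo {

    static boolean isLetter(char c) { return Character.isLetter(c); }

    // Simplified stand-in for the full CJ macro: CJK Unified Ideographs only.
    static boolean isCJ(char c) { return c >= '\u4E00' && c <= '\u9FFF'; }

    static boolean isArabicDiacritic(char c) {
        return c >= '\u064B' && c <= '\u0652';
    }

    // LETTER = !(![:letter:]|{CJ}|{AR})  ==  letter AND NOT CJ AND NOT AR
    static boolean letterMacro(char c) {
        return !(!isLetter(c) || isCJ(c) || isArabicDiacritic(c));
    }

    public static void main(String[] args) {
        System.out.println(letterMacro('a'));      // true: plain letter
        System.out.println(letterMacro('\u4E2D')); // false: excluded as CJ
        // A diacritic is not a letter, so it is already outside LETTER;
        // excluding it again changes nothing.
        System.out.println(letterMacro('\u064E')); // false either way
    }
}
```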

Am I right? Am I going in the right direction? Comments are very welcome!



