lucene-dev mailing list archives

From "Robert Muir (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (LUCENE-7393) Incorrect ICUTokenization on South East Asian Language
Date Tue, 26 Jul 2016 13:12:20 GMT

    [ https://issues.apache.org/jira/browse/LUCENE-7393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15393761#comment-15393761
] 

Robert Muir commented on LUCENE-7393:
-------------------------------------

{quote}
To clarify, you meant the 1% is for rule-based syllable segmentation, correct?
{quote}

Yes: the rules are unmodified from before, but I did inspect them. They handle all the common
structures but have no rules for the rarer cases mentioned in that study: syllable chaining, great sa, etc.

{quote}
The rule-based syllable algorithm in Lucene is nearly perfect and I'm satisfied with it. Just
curious, where did you get the rules?
{quote}

As I mentioned earlier, I created these rules informally almost 7 years ago. This is why I was
eager to remove them: we know they are not perfect. They were created when Myanmar in Unicode
was still changing rapidly, and I could not find formal algorithms at the time.

The rules are done in a "Unicode way": they really just identify the base consonant and try to
let Unicode properties take care of the rest (Word_Break=Extend, etc.). It is really not much
more than this main part:

{noformat}
$Cons = [[:Other_Letter:]&[:Myanmar:]];   # Myanmar consonants and independent vowels
$Virama = [\u1039];                       # virama: stacks the following consonant
$Asat = [\u103A];                         # asat: kills the inherent vowel

$ConsEx = $Cons ($Extend | $Format)*;                           # consonant plus trailing marks
$AsatEx = $Cons $Asat ($Virama $ConsEx)? ($Extend | $Format)*;  # asat-killed consonant
$MyanmarSyllableEx = $ConsEx ($Virama $ConsEx)? ($AsatEx)*;     # full syllable
{noformat}
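For illustration, the structure of these break rules can be sketched as an ordinary regular
expression. This is a rough approximation, not the real ICU rules: the character ranges below
are hand-picked stand-ins for [[:Other_Letter:]&[:Myanmar:]] and $Extend, $Format is omitted,
and real tokenization also handles digits, spaces, and non-Myanmar text.

```python
import re

# Rough stand-ins for the ICU character classes (assumptions, not the
# exact Unicode property sets the real rules use):
CONS   = r'[\u1000-\u102A]'               # consonants and independent vowels
EXTEND = r'[\u102B-\u1038\u103B-\u103E]'  # dependent vowels, medials, other marks
VIRAMA = '\u1039'                         # stacks the following consonant
ASAT   = '\u103A'                         # vowel killer

# Mirror of the rule structure above:
CONS_EX  = f'{CONS}{EXTEND}*'                                   # $ConsEx
ASAT_EX  = f'{CONS}{ASAT}(?:{VIRAMA}{CONS_EX})?{EXTEND}*'       # $AsatEx
SYLLABLE = f'{CONS_EX}(?:{VIRAMA}{CONS_EX})?(?:{ASAT_EX})*'     # $MyanmarSyllableEx

def syllables(text: str) -> list[str]:
    """Split Myanmar text into syllables using the sketched pattern."""
    return re.findall(SYLLABLE, text)
```

With the text from this issue, the whole string comes back as one syllable:
syllables("နည်") returns ["နည်"], i.e. the consonant, the second consonant, and the asat
stay together as a single token, matching the 4.10.3 behavior.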

{quote}
I didn't see the patch link though. 
{quote}

See the top of this issue: there is an Attachments section underneath the Description section.

> Incorrect ICUTokenization on South East Asian Language
> ------------------------------------------------------
>
>                 Key: LUCENE-7393
>                 URL: https://issues.apache.org/jira/browse/LUCENE-7393
>             Project: Lucene - Core
>          Issue Type: Bug
>          Components: modules/analysis
>    Affects Versions: 5.5
>         Environment: Ubuntu
>            Reporter: AM
>         Attachments: LUCENE-7393.patch
>
>
> Lucene 4.10.3 correctly tokenizes a syllable into one token. However, in Lucene 5.5.0 it ends
up as two tokens, which is incorrect. Please let me know: are the segmentation rules implemented
by native speakers of a particular language? In this particular example, it is the Myanmar
language. I understand that Lao, Khmer, and Myanmar fall into the ICU category. Thanks a lot.
> h4. Example 4.10.3
> {code:javascript}
> GET _analyze?tokenizer=icu_tokenizer&text="နည်"
> {
>    "tokens": [
>       {
>          "token": "နည်",
>          "start_offset": 1,
>          "end_offset": 4,
>          "type": "<ALPHANUM>",
>          "position": 1
>       }
>    ]
> }
> {code}
> h4. Example 5.5.0
> {code:javascript}
> GET _analyze?tokenizer=icu_tokenizer&text="နည်"
> {
>   "tokens": [
>     {
>       "token": "န",
>       "start_offset": 0,
>       "end_offset": 1,
>       "type": "<ALPHANUM>",
>       "position": 0
>     },
>     {
>       "token": "ည်",
>       "start_offset": 1,
>       "end_offset": 3,
>       "type": "<ALPHANUM>",
>       "position": 1
>     }
>   ]
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

