lucene-dev mailing list archives

From "Robert Muir (JIRA)" <>
Subject [jira] Commented: (LUCENE-2167) Implement StandardTokenizer with the UAX#29 Standard
Date Fri, 14 May 2010 21:41:42 GMT


Robert Muir commented on LUCENE-2167:

bq. Do you mean URL-as-token should not be attempted now? Or just this URL-breaking filter?

We can always add tailorings later, as Uwe has implemented Version-based support.

Personally I see no problems with this patch, and I think we should look at tying this in
as-is as the new StandardTokenizer, still backwards compatible thanks to Version support (we
can just invoke EnglishTokenizerImpl in that case).
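The Version-gated fallback described above (old grammar for old indexes, UAX#29 grammar for new ones) can be sketched as follows. This is an illustrative, self-contained sketch, not actual Lucene code: the `TokenizerGrammar` interface and the grammar class names are placeholders standing in for the real JFlex-generated scanners.

```java
// Illustrative sketch of Version-based grammar selection; the names below
// are placeholders, not the real Lucene classes.
enum Version {
    LUCENE_30, LUCENE_31;

    boolean onOrAfter(Version other) {
        return compareTo(other) >= 0;
    }
}

interface TokenizerGrammar {
    String name();
}

// Stands in for the legacy (English/European-oriented) JFlex scanner.
class EnglishGrammar implements TokenizerGrammar {
    public String name() { return "english"; }
}

// Stands in for the new UAX#29-based JFlex scanner.
class Uax29Grammar implements TokenizerGrammar {
    public String name() { return "uax29"; }
}

class GrammarSelector {
    // Callers constructed with an old matchVersion keep the old grammar,
    // so existing indexes remain compatible; new code gets UAX#29.
    static TokenizerGrammar select(Version matchVersion) {
        if (matchVersion.onOrAfter(Version.LUCENE_31)) {
            return new Uax29Grammar();
        }
        return new EnglishGrammar();
    }
}
```

The point of the pattern is that the behavior change is opt-in per constructor call, so both grammars can coexist in one release.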

I still want to rip StandardTokenizer out of Lucene core and into modules. I think that's not
too far away, and it's probably better to do this afterwards, but we can do it before then
if you want; it doesn't matter to me.

It will be great to have StandardTokenizer working for non-European languages out of the box!

> Implement StandardTokenizer with the UAX#29 Standard
> ----------------------------------------------------
>                 Key: LUCENE-2167
>                 URL:
>             Project: Lucene - Java
>          Issue Type: New Feature
>          Components: contrib/analyzers
>    Affects Versions: 3.1
>            Reporter: Shyamal Prasad
>            Assignee: Steven Rowe
>            Priority: Minor
>         Attachments: LUCENE-2167.patch, LUCENE-2167.patch, LUCENE-2167.patch, LUCENE-2167.patch,
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
> It would be really nice for StandardTokenizer to adhere to the standard as closely as
> we can with JFlex. Then its name would actually make sense.
> Such a transition would involve renaming the old StandardTokenizer to EuropeanTokenizer,
> as its javadoc claims:
> bq. This should be a good tokenizer for most European-language documents
> The new StandardTokenizer could then say
> bq. This should be a good tokenizer for most languages.
> All the English/Euro-centric behavior, like the acronym/company/apostrophe handling, can
> stay with that EuropeanTokenizer, and it could be used by the European analyzers.
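The UAX#29 default word-boundary rules the issue asks for are also what the JDK's `java.text.BreakIterator` word instance implements, so a quick way to preview the segmentation behavior (independent of any Lucene code) is a sketch like this:

```java
import java.text.BreakIterator;
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

class Uax29Preview {
    // Collect the word tokens BreakIterator produces. The JDK word instance
    // follows the UAX#29 default word-boundary rules, the same specification
    // the proposed StandardTokenizer grammar targets.
    static List<String> words(String text) {
        BreakIterator it = BreakIterator.getWordInstance(Locale.ROOT);
        it.setText(text);
        List<String> tokens = new ArrayList<>();
        int start = it.first();
        for (int end = it.next(); end != BreakIterator.DONE;
                start = end, end = it.next()) {
            String span = text.substring(start, end);
            // UAX#29 also reports whitespace/punctuation spans as segments;
            // keep only spans containing at least one letter or digit.
            if (span.codePoints().anyMatch(Character::isLetterOrDigit)) {
                tokens.add(span);
            }
        }
        return tokens;
    }

    public static void main(String[] args) {
        System.out.println(words("Hello, world! \u05e9\u05dc\u05d5\u05dd"));
    }
}
```

Unlike the acronym/apostrophe special-casing in the current grammar, these default rules are language-neutral, which is what makes them a reasonable baseline for non-European text.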

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
