lucene-dev mailing list archives

From "Robert Muir (JIRA)" <j...@apache.org>
Subject [jira] Commented: (LUCENE-2167) StandardTokenizer Javadoc does not correctly describe tokenization around punctuation characters
Date Wed, 24 Feb 2010 23:47:28 GMT

    [ https://issues.apache.org/jira/browse/LUCENE-2167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12838094#action_12838094 ]

Robert Muir commented on LUCENE-2167:
-------------------------------------

Steven, thanks for providing the link.

I guess this is the point where I also say: I think it would be really nice for StandardTokenizer
to adhere to the standard as closely as we can with JFlex (I realize that in 1.5 we won't
have support for > 0xffff). Then its name would actually make sense.
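
For a sense of what adhering to the standard would look like in practice, the JDK's
java.text.BreakIterator approximates the UAX #29 word-boundary rules. This is only a
stdlib sketch for illustration (not Lucene code, and BreakIterator is an approximation
of UAX #29, not a conformant implementation):

```java
import java.text.BreakIterator;
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

public class Uax29Sketch {
    // Return the word-like segments of text per the JDK's word BreakIterator,
    // dropping segments that contain no letter or digit (whitespace, punctuation).
    static List<String> words(String text) {
        List<String> out = new ArrayList<>();
        BreakIterator bi = BreakIterator.getWordInstance(Locale.ROOT);
        bi.setText(text);
        int start = bi.first();
        for (int end = bi.next(); end != BreakIterator.DONE; start = end, end = bi.next()) {
            String piece = text.substring(start, end);
            if (piece.codePoints().anyMatch(Character::isLetterOrDigit)) {
                out.add(piece);
            }
        }
        return out;
    }
}
```

Note how this differs from the classic grammar: "video,mp4,test" comes out as three
tokens here, because UAX #29 only keeps punctuation inside a word between two digits
(e.g. "3,000"), not between letters.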

In my opinion, such a transition would involve something like renaming the old StandardTokenizer
to EuropeanTokenizer, as its javadoc claims:
{code}
This should be a good tokenizer for most European-language documents
{code}

The new StandardTokenizer could then say
{code}
This should be a good tokenizer for most languages.
{code}

All the English/Euro-centric handling, like the acronym/company/apostrophe rules, could stay
with that "EuropeanTokenizer" or whatever it's called, and it could be used by the European analyzers.

But if we implement the Unicode rules, I think we should drop all this English/Euro-centric
handling from StandardTokenizer. Otherwise it should be called *StandardishTokenizer*.

We can obviously preserve backwards compatibility with Version, as Uwe has created a way to
use a different grammar for a different Version.
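
As a toy illustration of that Version-gated selection (the enum, names, and placeholder
grammars below are hypothetical stand-ins, not the real Lucene API; in Lucene the choice
would be between two JFlex-generated scanners inside the tokenizer):

```java
import java.util.List;
import java.util.function.Function;

public class VersionGate {
    // Hypothetical stand-in for Lucene's Version constants.
    enum Version { LUCENE_30, LUCENE_31 }

    // Pick a grammar based on the match version: indexes built with an older
    // Version keep the old grammar and so keep tokenizing identically.
    static Function<String, List<String>> grammarFor(Version matchVersion) {
        if (matchVersion.compareTo(Version.LUCENE_31) >= 0) {
            return VersionGate::unicodeTokenize;   // new grammar from the cutover release on
        }
        return VersionGate::classicTokenize;       // old grammar for old indexes
    }

    // Placeholder "grammars" just to make the gating observable; the real ones
    // are JFlex scanners, not regex splits.
    static List<String> unicodeTokenize(String s) { return List.of(s.split("\\W+")); }
    static List<String> classicTokenize(String s) { return List.of(s.split("\\s+")); }
}
```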

I expect some -1s to this; awaiting comments :)

> StandardTokenizer Javadoc does not correctly describe tokenization around punctuation characters
> -------------------------------------------------------------------------------------------------
>
>                 Key: LUCENE-2167
>                 URL: https://issues.apache.org/jira/browse/LUCENE-2167
>             Project: Lucene - Java
>          Issue Type: Bug
>    Affects Versions: 2.4.1, 2.9, 2.9.1, 3.0
>            Reporter: Shyamal Prasad
>            Priority: Minor
>         Attachments: LUCENE-2167.patch, LUCENE-2167.patch
>
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> The Javadoc for StandardTokenizer states:
> {quote}
> Splits words at punctuation characters, removing punctuation. 
> However, a dot that's not followed by whitespace is considered part of a token.
> Splits words at hyphens, unless there's a number in the token, in which case the whole
> token is interpreted as a product number and is not split.
> {quote}
> This is not accurate. The actual JFlex implementation treats hyphens interchangeably with
> punctuation. So, for example, "video,mp4,test" results in a *single* token, and not three
> tokens as the documentation would suggest.
> Additionally, the documentation suggests that "video-mp4-test-again" would become a single
> token, but in reality it results in two tokens: "video-mp4-test" and "again".
> IMHO the parser implementation is fine as is, since it is hard to keep everyone happy, but
> it is probably worth cleaning up the documentation string.
> The patch included here updates the documentation string and adds a few test cases to
> confirm the cases described above.
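
Both behaviors quoted above fall out of the alternating digit requirement in the classic
NUM rule: alphanumeric segments joined by punctuation form one token only while every
other segment contains a digit. A toy, ASCII-only approximation (not the actual JFlex
grammar, which also has ACRONYM, APOSTROPHE, HOST, etc.) can be sketched as:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ClassicGrammarSketch {
    // Simplified, ASCII-only fragments of the classic grammar.
    static final String ALPHANUM  = "[A-Za-z0-9]+";
    static final String HAS_DIGIT = "[A-Za-z0-9]*[0-9][A-Za-z0-9]*";
    static final String P         = "[-_/.,]";
    // NUM: segments joined by punctuation, where every other segment must
    // contain a digit. "video-mp4-test" matches; "-again" cannot extend the
    // match because "again" has no digit, so it becomes a separate token.
    static final Pattern NUM = Pattern.compile(
        ALPHANUM + "(?:" + P + HAS_DIGIT + "(?:" + P + ALPHANUM + ")?)+"
        + "|" + HAS_DIGIT + "(?:" + P + ALPHANUM + "(?:" + P + HAS_DIGIT + ")?)+");
    static final Pattern WORD = Pattern.compile(ALPHANUM);

    static List<String> tokenize(String text) {
        List<String> tokens = new ArrayList<>();
        Matcher num = NUM.matcher(text);
        Matcher word = WORD.matcher(text);
        int i = 0;
        while (i < text.length()) {
            if (!Character.isLetterOrDigit(text.charAt(i))) { i++; continue; }
            if (num.region(i, text.length()).lookingAt()) {   // try the longer rule first
                tokens.add(num.group());
                i = num.end();
            } else {
                word.region(i, text.length()).lookingAt();    // always matches at a letter/digit
                tokens.add(word.group());
                i = word.end();
            }
        }
        return tokens;
    }
}
```

Under this sketch "video,mp4,test" is one token and "video-mp4-test-again" is two,
matching the behavior the issue describes.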

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


---------------------------------------------------------------------
To unsubscribe, e-mail: java-dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: java-dev-help@lucene.apache.org

