lucene-dev mailing list archives

From "Christian Moen (Commented) (JIRA)" <>
Subject [jira] [Commented] (LUCENE-3940) When Japanese (Kuromoji) tokenizer removes a punctuation token it should leave a hole
Date Mon, 02 Apr 2012 10:33:22 GMT


Christian Moen commented on LUCENE-3940:

I'm not familiar with the various considerations that were made with StandardTokenizer, but
please allow me to share some comments anyway.

Perhaps it's useful to distinguish between _analysis for information retrieval_ and _analysis
for information extraction_ here?

I like Michael's and Steven's idea of doing tokenization that doesn't discard any information.
This is certainly useful in the case of _information extraction_.  For example, if we'd like
to extract noun phrases based on part-of-speech tags, we don't want to conjoin two nouns when
a punctuation character sits between them (unless that punctuation character is a middle dot).
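To make the middle-dot case concrete, here is a small sketch of that extraction rule, modeled in plain Python rather than against the actual Kuromoji/Lucene APIs. The token/tag representation and the function name are illustrative only; real Kuromoji emits much richer part-of-speech attributes per token.

```python
# Toy model: conjoin runs of adjacent nouns into noun phrases.
# A katakana middle dot between nouns does NOT break the phrase;
# any other punctuation does.

MIDDLE_DOT = "\u30fb"  # U+30FB KATAKANA MIDDLE DOT

def noun_phrases(tokens):
    """tokens: list of (surface, pos) pairs, pos in {"noun", "punct", ...}.
    Returns conjoined surface forms for runs of two or more nouns."""
    phrases, current = [], []
    for surface, pos in tokens:
        if pos == "noun":
            current.append(surface)
        elif pos == "punct" and surface == MIDDLE_DOT and current:
            # A middle dot may join the nouns around it; here we simply
            # skip it and keep the current run open (whether to keep the
            # dot itself in the phrase is a policy choice).
            continue
        else:
            # Any other punctuation (or non-noun token) closes the run.
            if len(current) > 1:
                phrases.append("".join(current))
            current = []
    if len(current) > 1:
        phrases.append("".join(current))
    return phrases
```

For example, `[("ビル", "noun"), ("・", "punct"), ("ゲイツ", "noun")]` yields one phrase, while replacing the middle dot with `。` yields none. This only works if the tokenizer still surfaces the punctuation tokens, which is the point being made above.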

Robert is of course correct that we generally don't want to index punctuation characters that
occur in every document, so from an _information retrieval_ point of view, we'd like punctuation
characters removed.

If there's an established convention that Tokenizer variants discard punctuation and produce
the terms that are meant to be directly searchable, it sounds like a good idea to stick to
that convention here as well.
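Either way, whichever component drops the punctuation needs to "leave a hole" so positions stay intact. Below is a minimal model of that bookkeeping in plain Python, not the actual Lucene API: when a token is removed, its position increment is added onto the next surviving token, which is the same effect Lucene's StopFilter achieves via PositionIncrementAttribute. The function name and the `(term, increment)` representation are made up for illustration.

```python
# Model of removing punctuation tokens while leaving holes: each token
# carries a position increment; a dropped token's increment is folded
# into the next surviving token so phrase positions are preserved.

def remove_punctuation(tokens, is_punct):
    """tokens: list of (term, pos_increment) pairs.
    Returns the stream with punctuation dropped but increments preserved."""
    out, pending = [], 0
    for term, inc in tokens:
        if is_punct(term):
            pending += inc          # remember the skipped position(s)
        else:
            out.append((term, inc + pending))
            pending = 0
    return out
```

With this, `foo , bar` becomes `foo` at position 1 and `bar` with increment 2, i.e. the graph looks the same as if the punctuation had been indexed and then removed by a stop filter, which is exactly the behavior LUCENE-3940 asks for.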

If there's no established convention, it seems useful for a Tokenizer to provide as much
detail as possible about the input text and leave it to downstream Filters/Analyzers to remove
whatever is suitable for a particular processing purpose.  We can provide common ready-to-use
Analyzers with reasonable defaults that users can turn to, e.g. to process a specific language
or do another common high-level task with text.

Hence, perhaps each Tokenizer can decide what makes the most sense to do based on that particular
tokenizer's scope of processing?

To Robert's point, this would leave processing somewhat arbitrary and inconsistent, but that
would be _by design_, as it wouldn't be a Tokenizer's role to enforce any overall consistency
-- e.g. with regard to punctuation -- higher-level Analyzers would provide that.

> When Japanese (Kuromoji) tokenizer removes a punctuation token it should leave a hole
> -------------------------------------------------------------------------------------
>                 Key: LUCENE-3940
>                 URL:
>             Project: Lucene - Java
>          Issue Type: Bug
>            Reporter: Michael McCandless
>            Assignee: Michael McCandless
>            Priority: Minor
>             Fix For: 4.0
>         Attachments: LUCENE-3940.patch, LUCENE-3940.patch, LUCENE-3940.patch, LUCENE-3940.patch
> I modified BaseTokenStreamTestCase to assert that the start/end
> offsets match for graph (posLen > 1) tokens, and this caught a bug in
> Kuromoji when the decompounding of a compound token has a punctuation
> token that's dropped.
> In this case we should leave hole(s) so that the graph is intact, ie,
> the graph should look the same as if the punctuation tokens were not
> initially removed, but then a StopFilter had removed them.
> This also affects tokens that have no compound over them, ie we fail
> to leave a hole today when we remove the punctuation tokens.
> I'm not sure this is serious enough to warrant fixing in 3.6 at the
> last minute...
