lucene-dev mailing list archives

From "Robert Muir (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (LUCENE-6913) Standard/Classic/UAX tokenizers could be more ram efficient
Date Mon, 30 Nov 2015 03:11:11 GMT

     [ https://issues.apache.org/jira/browse/LUCENE-6913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Robert Muir updated LUCENE-6913:
--------------------------------
    Attachment: LUCENE-6913.not.a.patch

Just showing what I mean... not actually the way we want to do this. I think ideally we want to fix jflex to use {{byte[]}} when there are <= 256 character classes?
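
As a rough sketch of that idea (hypothetical code, not what jflex actually generates; it assumes the same run-length packed-string format as the existing {{zzUnpackCMap}}, and the method name is made up):

{noformat}
  // Hypothetical sketch: unpack the run-length-encoded class map into a
  // byte[] instead of a char[]. Only valid when the grammar declares
  // <= 256 character classes, since each class id must fit in one byte.
  private static byte[] zzUnpackCMapToBytes(String packed) {
    byte[] map = new byte[0x110000];          // one slot per Unicode code point
    int i = 0;                                // index into the packed string
    int j = 0;                                // index into the map
    while (i < packed.length()) {
      int count = packed.charAt(i++);         // run length
      byte value = (byte) packed.charAt(i++); // character class for this run
      do { map[j++] = value; } while (--count > 0);
    }
    return map;
  }
{noformat}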

> Standard/Classic/UAX tokenizers could be more ram efficient
> -----------------------------------------------------------
>
>                 Key: LUCENE-6913
>                 URL: https://issues.apache.org/jira/browse/LUCENE-6913
>             Project: Lucene - Core
>          Issue Type: Improvement
>            Reporter: Robert Muir
>         Attachments: LUCENE-6913.not.a.patch
>
>
> These tokenizers map codepoints to character classes with the following data structure (loaded in clinit):
> {noformat}
>   private static char [] zzUnpackCMap(String packed) {
>     char [] map = new char[0x110000];
> {noformat}
> This requires 2MB RAM for each tokenizer class (in trunk 6MB if all 3 classes are loaded, in branch_5x 10MB since there are 2 additional backwards-compat classes).
> On the other hand, none of our tokenizers actually use a huge number of character classes, so {{char}} is overkill: e.g. this map can safely be a {{byte[]}} and we can save half the memory. Perhaps it could make these tokenizers faster too.
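
As a back-of-the-envelope check of those figures (raw array payload sizes only, ignoring JVM object headers and alignment; the class name below is made up for illustration):

{noformat}
  // Illustrative only: payload size of the code point -> class map.
  public class CMapFootprint {
    public static void main(String[] args) {
      long entries = 0x110000;                  // 1,114,112 Unicode code points
      long charMap = entries * Character.BYTES; // 2,228,224 bytes, ~2.1 MB per class
      long byteMap = entries * Byte.BYTES;      // 1,114,112 bytes, ~1.1 MB per class
      System.out.println("char[] map: " + charMap + " bytes");
      System.out.println("byte[] map: " + byteMap + " bytes");
      System.out.println("trunk, 3 classes:     " + 3 * charMap + " bytes"); // ~6.4 MB
      System.out.println("branch_5x, 5 classes: " + 5 * charMap + " bytes"); // ~10.6 MB
    }
  }
{noformat}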



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


