lucene-dev mailing list archives

From "Marvin Humphrey (JIRA)" <j...@apache.org>
Subject [jira] Commented: (LUCENE-1859) TermAttributeImpl's buffer will never "shrink" if it grows too big
Date Wed, 26 Aug 2009 18:23:59 GMT

    [ https://issues.apache.org/jira/browse/LUCENE-1859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12748064#action_12748064 ]

Marvin Humphrey commented on LUCENE-1859:
-----------------------------------------

The worst-case scenario seems kind of theoretical, since there are so many
reasons that huge tokens are impractical. (Is a priority of "major"
justified?) If there's a significant benefit to shrinking the allocation, it's
minimizing average memory usage over time.  But even that assumes a nearly
pathological distribution in field size -- it would have to be large for early
documents, then consistently small for subsequent documents.  If it's
scattered, you have to plan for worst-case RAM usage as an app developer
anyway, which generally means limiting token size.

I assume that, based on this report, TermAttributeImpl never gets reset or
discarded/recreated over the course of an indexing session?

-0 if the reallocation happens no more often than once per document.

-1 if the reallocation has to be performed in an inner loop.

> TermAttributeImpl's buffer will never "shrink" if it grows too big
> ------------------------------------------------------------------
>
>                 Key: LUCENE-1859
>                 URL: https://issues.apache.org/jira/browse/LUCENE-1859
>             Project: Lucene - Java
>          Issue Type: Bug
>          Components: Analysis
>    Affects Versions: 2.9
>            Reporter: Tim Smith
>
> This was previously an issue with Token as well.
> If a TermAttributeImpl is populated with a very long buffer, it will never be able to
> reclaim this memory.
> Obviously, it can be argued that Tokenizers should never emit "large" tokens; however,
> it seems that TermAttributeImpl should have a reasonable static "MAX_BUFFER_SIZE" such
> that if the term buffer grows bigger than this, it will shrink back down to this size once
> the next token smaller than MAX_BUFFER_SIZE is set.
> I don't think I have actually encountered issues with this yet, however it seems like
> if you have multiple indexing threads, you could end up with a char[Integer.MAX_VALUE] per
> thread (in the very worst-case scenario).
> Perhaps growTermBuffer should have the logic to shrink if the buffer is currently larger
> than MAX_BUFFER_SIZE and it needs less than MAX_BUFFER_SIZE.
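
For reference, a minimal sketch of the shrink-on-set idea described above. The
class name, the MAX_BUFFER_SIZE value, and the nextSize() growth helper are
placeholders for illustration, not Lucene's actual implementation:

    // Sketch of a shrink-capable term buffer resize, assuming a hypothetical
    // MAX_BUFFER_SIZE cap; names and growth policy are illustrative only.
    class TermBufferSketch {
      // Hypothetical cap: buffers larger than this are shrunk back once a
      // token that fits within the cap is set.
      static final int MAX_BUFFER_SIZE = 16 * 1024;
      static final int MIN_BUFFER_SIZE = 10;

      private char[] termBuffer = new char[MIN_BUFFER_SIZE];

      // Ensure the buffer can hold newSize chars, shrinking oversized buffers.
      void resizeTermBuffer(int newSize) {
        if (termBuffer.length < newSize) {
          // Grow path: over-allocate a little to amortize future growth.
          termBuffer = new char[nextSize(newSize)];
        } else if (termBuffer.length > MAX_BUFFER_SIZE && newSize <= MAX_BUFFER_SIZE) {
          // Shrink path: an earlier huge token pushed the buffer past the cap,
          // but the current token fits under it, so reclaim the memory.
          termBuffer = new char[MAX_BUFFER_SIZE];
        }
        // Otherwise the existing buffer is reused as-is.
      }

      private static int nextSize(int target) {
        // Simplified over-allocation; real code would delegate to a utility
        // like ArrayUtil for the growth policy.
        return target + (target >> 2) + 3;
      }
    }

Putting the shrink check in the same method that grows the buffer means the
extra reallocation happens at most once, when the buffer transitions back
below the cap; subsequent small tokens reuse the capped buffer.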


