accumulo-dev mailing list archives

From "Keith Turner (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (ACCUMULO-624) iterators may open lots of compressors
Date Thu, 21 Jun 2012 16:27:44 GMT

    [ https://issues.apache.org/jira/browse/ACCUMULO-624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13398538#comment-13398538 ]

Keith Turner commented on ACCUMULO-624:
---------------------------------------

A workaround for this issue is to enable the block cache for the table. This causes rfile blocks to be read fully into memory and the underlying stream to be closed immediately, releasing the decompressor. With the cache enabled, a decompressor is not kept open for each deep copy.
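
For reference, a minimal sketch of how the block cache can be enabled through the Java client API, assuming the 1.x Connector-based API and the table.cache.block.enable property; the wrapper class and table name are purely illustrative:

    import org.apache.accumulo.core.client.AccumuloException;
    import org.apache.accumulo.core.client.AccumuloSecurityException;
    import org.apache.accumulo.core.client.Connector;

    public class EnableBlockCache {
        // Turn on the data block cache for one table so that decompressed
        // rfile blocks are cached in the tserver and decompressors are
        // released as soon as a block has been read.
        public static void enable(Connector connector, String table)
                throws AccumuloException, AccumuloSecurityException {
            connector.tableOperations().setProperty(
                table, "table.cache.block.enable", "true");
        }
    }

The same setting can be made from the shell with: config -t <table> -s table.cache.block.enable=true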

I did some tests with the intersecting iterator to verify this. Without the cache, querying 5 terms allocated 5 decompressors. With the cache, only one decompressor was allocated.
                
> iterators may open lots of compressors
> --------------------------------------
>
>                 Key: ACCUMULO-624
>                 URL: https://issues.apache.org/jira/browse/ACCUMULO-624
>             Project: Accumulo
>          Issue Type: Bug
>          Components: tserver
>            Reporter: Eric Newton
>            Assignee: Keith Turner
>
> A large iterator tree may create many instances of Compressors.  These instances are
> pulled from a pool that never decreases in size.  So, if 50 simultaneous queries are run over
> dozens of files, each with a complex iterator stack, there will be thousands of compressors
> created.  Each of these holds a large buffer.  This can cause the server to run out of memory.
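
For context, Accumulo's rfile compression is built on Hadoop's compression codecs, so the pool described above is presumably Hadoop's CodecPool, which keeps every returned decompressor cached indefinitely. A rough sketch of that growth pattern, using GzipCodec purely for illustration:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.compress.CodecPool;
    import org.apache.hadoop.io.compress.Decompressor;
    import org.apache.hadoop.io.compress.GzipCodec;

    public class DecompressorGrowth {
        public static void main(String[] args) {
            GzipCodec codec = new GzipCodec();
            codec.setConf(new Configuration());

            // Simulate a burst of concurrent readers, each checking out its
            // own decompressor; the pool starts empty, so new instances (each
            // holding its own buffer) are allocated.
            Decompressor[] inUse = new Decompressor[50];
            for (int i = 0; i < inUse.length; i++) {
                inUse[i] = CodecPool.getDecompressor(codec);
            }

            // When the readers finish, all 50 decompressors go back into the
            // pool and stay there; the pool never shrinks.
            for (Decompressor d : inUse) {
                CodecPool.returnDecompressor(d);
            }
        }
    }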

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
