accumulo-notifications mailing list archives

From "Josh Elser (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (ACCUMULO-1770) out of memory error on very long running tablet server
Date Fri, 11 Oct 2013 15:28:42 GMT

    [ https://issues.apache.org/jira/browse/ACCUMULO-1770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13792720#comment-13792720 ]

Josh Elser commented on ACCUMULO-1770:
--------------------------------------

bq. I'm not sure what to make of this.

I would have to agree with you there.

I think I did a test recently to generate 1B entries in a table: 10M rows with 10 CFs and
10 CQs per row, just using a BatchWriter on a single box. I gave the in-memory maps something
like 16G, and even after compaction, resident usage was still high. I'll have to see if I can
reproduce that.
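
For reference, a minimal sketch of that kind of load, assuming the standard BatchWriter API;
the instance name, credentials, and table name below are placeholders, not the actual test code:

{code:java}
import org.apache.accumulo.core.client.BatchWriter;
import org.apache.accumulo.core.client.BatchWriterConfig;
import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.ZooKeeperInstance;
import org.apache.accumulo.core.client.security.tokens.PasswordToken;
import org.apache.accumulo.core.data.Mutation;
import org.apache.accumulo.core.data.Value;

public class LoadSketch {
  public static void main(String[] args) throws Exception {
    Connector conn = new ZooKeeperInstance("instance", "zkhost:2181")
        .getConnector("root", new PasswordToken("secret"));
    BatchWriter bw = conn.createBatchWriter("bigtable",
        new BatchWriterConfig().setMaxMemory(64L * 1024 * 1024));
    Value v = new Value("x".getBytes());
    // 10M rows x 10 CFs x 10 CQs = 1B entries, one mutation per row
    for (long row = 0; row < 10000000L; row++) {
      Mutation m = new Mutation(String.format("row%08d", row));
      for (int cf = 0; cf < 10; cf++) {
        for (int cq = 0; cq < 10; cq++) {
          m.put("cf" + cf, "cq" + cq, v);
        }
      }
      bw.addMutation(m);
    }
    bw.close();
  }
}
{code}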

You mentioned a large cluster -- perhaps a missing factor is the concurrency of writes to one
in-memory map? Maybe something different happens when a single tserver is trying to service
many large writes at once?

> out of memory error on very long running tablet server
> ------------------------------------------------------
>
>                 Key: ACCUMULO-1770
>                 URL: https://issues.apache.org/jira/browse/ACCUMULO-1770
>             Project: Accumulo
>          Issue Type: Bug
>          Components: tserver
>            Reporter: Eric Newton
>            Assignee: Eric Newton
>         Attachments: FragmentTest.java, memory-usage.png
>
>
> On a large cluster it was noticed that a few of the tablet servers had been pushed into
swap.  This didn't affect the performance of the server until it ran out of memory and the
process was killed.  The GC reports in the debug log showed the system had plenty of heap
space for the JVM.  The number of threads in the server was not excessive (dozens).  This
cluster ingests some large values (megabytes).  The tablet server had been up for a month
prior to running out of memory.  MALLOC_ARENA_MAX had already been set to 1.
> * Investigate the effect of fragmentation on memory usage for large value inserts.
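
A hypothetical starting point for that investigation (this is not the attached FragmentTest.java;
the table name, value size, and row count are placeholders) would be a long-running writer pushing
multi-megabyte values through a single BatchWriter while the tserver's resident set size is
watched from the OS:

{code:java}
import java.util.Random;

import org.apache.accumulo.core.client.BatchWriter;
import org.apache.accumulo.core.client.BatchWriterConfig;
import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.ZooKeeperInstance;
import org.apache.accumulo.core.client.security.tokens.PasswordToken;
import org.apache.accumulo.core.data.Mutation;
import org.apache.accumulo.core.data.Value;

public class LargeValueSketch {
  public static void main(String[] args) throws Exception {
    Connector conn = new ZooKeeperInstance("instance", "zkhost:2181")
        .getConnector("root", new PasswordToken("secret"));
    BatchWriter bw = conn.createBatchWriter("largevalues", new BatchWriterConfig());
    Random rand = new Random();
    // one multi-megabyte value per mutation, written over a long period,
    // while the tserver's resident set size is tracked externally (e.g. via ps)
    for (long row = 0; row < 1000000L; row++) {
      byte[] big = new byte[2 * 1024 * 1024];
      rand.nextBytes(big);
      Mutation m = new Mutation(String.format("row%08d", row));
      m.put("cf", "cq", new Value(big));
      bw.addMutation(m);
    }
    bw.close();
  }
}
{code}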



--
This message was sent by Atlassian JIRA
(v6.1#6144)
