hbase-dev mailing list archives

From "Andrew Purtell (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HBASE-900) Regionserver memory leak causing OOME during relatively modest bulk importing
Date Sat, 22 Nov 2008 05:34:44 GMT

    [ https://issues.apache.org/jira/browse/HBASE-900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12649899#action_12649899
] 

Andrew Purtell commented on HBASE-900:
--------------------------------------

This is a recurring issue presently causing pain on current trunk. It seems worse now than in
0.18.1: heap gets out of control (> 1GB) for regionservers hosting only ~20 regions or so.
Much of the heap is tied up in byte arrays referenced by HStoreKeys (HSKs), which are in turn
referenced by the WritableComparable[] arrays used by MapFile indexes.

From a jgray server:

class                                        #instances       #bytes
[B                                              3525873    615313626
org.apache.hadoop.hbase.HStoreKey               1605046     51361472
java.util.TreeMap$Entry                         1178067     48300747
[Lorg.apache.hadoop.io.WritableComparable;           56      4216992

Approximately 56 mapfile indexes were resident. Approximately 15-20 regions were being hosted
at the time of the crash. 
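As a sanity check on the histogram above, the per-instance averages work out to roughly 175 bytes per byte[], exactly 32 bytes per HStoreKey, and about 686 MB of heap accounted for by these four classes alone (plain arithmetic on the reported numbers, nothing HBase-specific):

```python
# Arithmetic on the reported histogram: class -> (instance count, total bytes).
histogram = {
    "[B": (3525873, 615313626),
    "org.apache.hadoop.hbase.HStoreKey": (1605046, 51361472),
    "java.util.TreeMap$Entry": (1178067, 48300747),
    "[Lorg.apache.hadoop.io.WritableComparable;": (56, 4216992),
}

for cls, (count, total_bytes) in histogram.items():
    print(f"{cls}: avg {total_bytes / count:.1f} bytes/instance")

# Total heap attributable to these four classes, in MB.
total_mb = sum(b for _, b in histogram.values()) / (1024 * 1024)
print(f"total: {total_mb:.0f} MB")
```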

On an apurtell server, >900MB of heap was observed to be consumed by mapfile indexes for
48 store files corresponding to 16 regions.
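Back-of-the-envelope division of the apurtell numbers gives a lower bound of roughly 19 MB of index per store file, with 3 store files per region (an illustration of the scale, not a measurement):

```python
# Rough per-index cost implied by the apurtell observation:
# >900 MB of heap held by MapFile indexes across 48 store files / 16 regions.
heap_mb = 900        # reported lower bound
store_files = 48
regions = 16

per_index_mb = heap_mb / store_files
files_per_region = store_files // regions
print(f"~{per_index_mb:.2f} MB per MapFile index (at least)")
print(f"{files_per_region} store files per region")
```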


> Regionserver memory leak causing OOME during relatively modest bulk importing
> -----------------------------------------------------------------------------
>
>                 Key: HBASE-900
>                 URL: https://issues.apache.org/jira/browse/HBASE-900
>             Project: Hadoop HBase
>          Issue Type: Bug
>    Affects Versions: 0.2.1, 0.18.0
>            Reporter: Jonathan Gray
>            Assignee: stack
>            Priority: Critical
>         Attachments: memoryOn13.png
>
>
> I have recreated this issue several times and it appears to have been introduced in 0.2.
> During an import to a single table, memory usage of individual region servers grows w/o
> bounds and when set to the default 1GB it will eventually die with OOME.  This has happened
> to me as well as Daniel Ploeg on the mailing list.  In my case, I have 10 RS nodes and OOME
> happens w/ 1GB heap at only about 30-35 regions per RS.  In previous versions, I have imported
> to several hundred regions per RS with default heap size.
> I am able to get past this by increasing the max heap to 2GB.  However, the appearance
> of this in newer versions leads me to believe there is now some kind of memory leak happening
> in the region servers during import.
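The 2GB workaround described above amounts to raising the regionserver JVM heap; in the HBase versions affected that is typically set via conf/hbase-env.sh (the value below is illustrative, matching the reported workaround):

```shell
# conf/hbase-env.sh -- maximum heap, in MB, for HBase daemons.
# 2000 MB matches the workaround reported above; size to your hardware.
export HBASE_HEAPSIZE=2000
```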

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

