hbase-dev mailing list archives

From "stack (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HBASE-900) Regionserver memory leak causing OOME during relatively modest bulk importing
Date Wed, 15 Oct 2008 18:02:46 GMT

     [ https://issues.apache.org/jira/browse/HBASE-900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
stack updated HBASE-900:

    Attachment: memoryOn13.png

I ran the randomread test overnight w/ GC logging enabled.  Here are snippets from the GC log, taken at different times during the night, showing full GCs:

3738.529: [Full GC 107893K->86326K(220480K), 0.3393940 secs]
3944.907: [Full GC 110079K->90694K(212160K), 0.3828950 secs]
43142.078: [Full GC 105996K->82458K(139840K), 0.3558530 secs]
43339.019: [Full GC 102767K->86387K(190656K), 0.3512450 secs]
43490.046: [Full GC 105187K->87709K(212288K), 0.3523640 secs]
43735.589: [Full GC 107799K->88233K(174784K), 0.3547080 secs]
25003.983: [Full GC 105412K->87523K(205312K), 0.3559230 secs]
25139.998: [Full GC 106102K->80911K(131712K), 0.3432420 secs]
47924.811: [Full GC 105487K->80566K(148864K), 0.3392500 secs]
48088.641: [Full GC 98025K->86603K(212736K), 0.3439750 secs]
48338.127: [Full GC 105214K->87088K(159872K), 0.3481490 secs]

It's holding pretty steady.

I also attached memory graph from ganglia over night.  Shows nothing untoward.
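For what it's worth, the full-GC lines above can be checked mechanically rather than by eye. A minimal sketch (the regex and the min/max check are mine, not from the thread):

```python
import re

# GC log snippets quoted in the comment above
gc_log = """\
3738.529: [Full GC 107893K->86326K(220480K), 0.3393940 secs]
3944.907: [Full GC 110079K->90694K(212160K), 0.3828950 secs]
43142.078: [Full GC 105996K->82458K(139840K), 0.3558530 secs]
43339.019: [Full GC 102767K->86387K(190656K), 0.3512450 secs]
43490.046: [Full GC 105187K->87709K(212288K), 0.3523640 secs]
43735.589: [Full GC 107799K->88233K(174784K), 0.3547080 secs]
25003.983: [Full GC 105412K->87523K(205312K), 0.3559230 secs]
25139.998: [Full GC 106102K->80911K(131712K), 0.3432420 secs]
47924.811: [Full GC 105487K->80566K(148864K), 0.3392500 secs]
48088.641: [Full GC 98025K->86603K(212736K), 0.3439750 secs]
48338.127: [Full GC 105214K->87088K(159872K), 0.3481490 secs]
"""

# Line format: <uptime>: [Full GC <before>K-><after>K(<total>K), <secs> secs]
pattern = re.compile(r"\[Full GC (\d+)K->(\d+)K\((\d+)K\), ([\d.]+) secs\]")

# The post-GC footprint (the "<after>" field) is what matters for a leak:
# if it climbs across full GCs, memory is being retained.
post_gc_kb = [int(m.group(2)) for m in pattern.finditer(gc_log)]

print("post-GC heap (KB): min=%d max=%d" % (min(post_gc_kb), max(post_gc_kb)))
```

The post-GC footprint stays in a narrow band (roughly 80-91 MB) with no upward trend, which is what "holding pretty steady" means here.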

> Regionserver memory leak causing OOME during relatively modest bulk importing
> -----------------------------------------------------------------------------
>                 Key: HBASE-900
>                 URL: https://issues.apache.org/jira/browse/HBASE-900
>             Project: Hadoop HBase
>          Issue Type: Bug
>    Affects Versions: 0.2.1, 0.18.0
>            Reporter: Jonathan Gray
>            Assignee: stack
>            Priority: Critical
>             Fix For: 0.18.1
>         Attachments: memoryOn13.png
> I have recreated this issue several times and it appears to have been introduced in 0.2.
> During an import to a single table, memory usage of individual region servers grows w/o
> bounds and when set to the default 1GB it will eventually die with OOME.  This has happened
> to me as well as Daniel Ploeg on the mailing list.  In my case, I have 10 RS nodes and OOME
> happens w/ 1GB heap at only about 30-35 regions per RS.  In previous versions, I have
> imported to several hundred regions per RS with default heap size.
> I am able to get past this by increasing the max heap to 2GB.  However, the appearance
> of this in newer versions leads me to believe there is now some kind of memory leak
> happening in the region servers during import.
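The workaround mentioned in the report (raising the max heap) and the GC logging used in the comment above would both be set in conf/hbase-env.sh. A sketch, assuming the 0.18-era variable names (HBASE_HEAPSIZE in MB, HBASE_OPTS for extra JVM flags); the log path is illustrative:

```shell
# hbase-env.sh: raise the regionserver heap from the 1000 MB default to 2 GB
export HBASE_HEAPSIZE=2000

# Enable GC logging so full-GC behavior can be inspected, as in the comment above
export HBASE_OPTS="$HBASE_OPTS -verbose:gc -XX:+PrintGCDetails \
  -XX:+PrintGCTimeStamps -Xloggc:$HBASE_HOME/logs/gc-hbase.log"
```

Note this only masks a leak: if the post-GC footprint keeps growing, a larger heap just delays the OOME.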

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
