hbase-dev mailing list archives

From "Bryan Duxbury (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HBASE-512) Add configuration for global aggregate memcache size
Date Tue, 08 Apr 2008 00:07:24 GMT

     [ https://issues.apache.org/jira/browse/HBASE-512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bryan Duxbury updated HBASE-512:
--------------------------------

    Resolution: Fixed
        Status: Resolved  (was: Patch Available)

Committed to trunk.

> Add configuration for global aggregate memcache size
> ----------------------------------------------------
>
>                 Key: HBASE-512
>                 URL: https://issues.apache.org/jira/browse/HBASE-512
>             Project: Hadoop HBase
>          Issue Type: Sub-task
>          Components: regionserver
>            Reporter: Bryan Duxbury
>            Assignee: Bryan Duxbury
>             Fix For: 0.2.0
>
>         Attachments: 512-v2.patch, 512-v3.patch, 512.patch
>
>
> Currently, we have a configuration parameter for the size a Memcache must reach
> before it is flushed. This leads to pretty evenly sized mapfiles when flushes run,
> which is nice. However, as noted in the parent issue, we can often get to a point
> where we run out of memory because too much data is hanging around in Memcaches.
> I think we should add a new configuration parameter that governs the total amount
> of memory the region server may spend on Memcaches. This would have to be some
> number less than the heap size; we'll have to discover the proper values through
> experimentation. Then, when a put comes in, if the global aggregate size of all
> the Memcaches across all the stores is at the threshold, we should block the
> current and any subsequent put operations from completing until forced flushes
> bring the memory usage back down to a safe level. The existing strategy for
> triggering flushes will still be in play, just augmented with this blocking
> behavior.
> This approach has the advantage of helping us avoid OOME situations by warning us
> well in advance of overflow. Additionally, it becomes something of a
> performance-tuning knob, allowing you to allocate more memory to improve write
> performance. This is superior to the previously suggested PhantomReference
> approach, because that could cause us to bump into further OOMEs while we're
> trying to flush to avoid them.
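
Below is a minimal, illustrative Java sketch of the blocking behavior described in
the quoted paragraph. It is not the committed HBASE-512 patch: the class name, the
configuration key, and the accounting/wake-up logic are assumptions made for the
example.

    // Illustrative sketch only -- not the committed HBASE-512 patch.
    public class GlobalMemcacheLimiter {
      // Hypothetical configuration key; the real key is whatever the patch defines.
      public static final String LIMIT_KEY = "hbase.regionserver.globalMemcacheLimit";

      private final long globalLimit;   // aggregate memcache bytes at which puts block
      private final long safeLevel;     // bytes at which blocked puts are released
      private long globalMemcacheSize;  // current aggregate size across all stores

      public GlobalMemcacheLimiter(long globalLimit, long safeLevel) {
        this.globalLimit = globalLimit;
        this.safeLevel = safeLevel;
      }

      /** Called on each put; blocks while the aggregate size is over the limit. */
      public synchronized void accountForPut(long bytesAdded) throws InterruptedException {
        globalMemcacheSize += bytesAdded;
        while (globalMemcacheSize >= globalLimit) {
          wait();  // forced flushes on another thread call recordFlush() and wake us
        }
      }

      /** Called by the flusher thread after a memcache flush frees memory. */
      public synchronized void recordFlush(long bytesFlushed) {
        globalMemcacheSize -= bytesFlushed;
        if (globalMemcacheSize <= safeLevel) {
          notifyAll();  // usage is back at a safe level; release blocked puts
        }
      }
    }

In such a design, accountForPut() would be wired into the region server's put path
and recordFlush() into the flusher thread; keeping the release level below the
blocking limit gives resumed puts some headroom so they don't immediately block
again.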

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

