hbase-dev mailing list archives

From "Billy Pearson (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HBASE-70) [hbase] memory management
Date Wed, 06 Feb 2008 22:29:08 GMT

    [ https://issues.apache.org/jira/browse/HBASE-70?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12566380#action_12566380 ]

Billy Pearson commented on HBASE-70:

Memory usage is kind of high.

I have HBASE-69 applied to my cluster.

One region server held the only 2 regions for the table I have in hbase.
The regions' mapfiles total about 1.5GB on disk, and the Linux top command reports the region
server using 1.6GB of memory.

After the splits, the regions loaded on different region servers, but the one that had been
holding them was still using 1.6GB of memory.
Using 1.6GB of memory is kind of high, I think, as the regions were only 1.5GB on disk.
With this said, we should also be looking at what is using so much memory and not releasing it.
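A first step toward seeing what is not being released is having the region server report its own heap from inside the JVM. A minimal sketch using the standard JDK `java.lang.management` beans (the `HeapReport` class name is hypothetical, not HBase code):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

// HeapReport is a hypothetical helper for logging heap state from a
// running JVM; the JMX beans used here are standard JDK APIs.
public class HeapReport {
    public static long usedHeapBytes() {
        MemoryUsage heap =
            ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        return heap.getUsed();
    }

    public static void main(String[] args) {
        long used = usedHeapBytes();
        // Suggesting a GC first gives a closer picture of live data
        // than raw used-heap, which includes garbage not yet collected.
        System.gc();
        long afterGc = usedHeapBytes();
        System.out.println("used=" + used + " afterGc=" + afterGc);
    }
}
```

Comparing used heap before and after a suggested GC would help tell retained data apart from a heap that simply has not shrunk back, which is one plausible reason top still shows 1.6GB after the regions moved off.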

> [hbase] memory management
> -------------------------
>                 Key: HBASE-70
>                 URL: https://issues.apache.org/jira/browse/HBASE-70
>             Project: Hadoop HBase
>          Issue Type: Improvement
>          Components: regionserver
>            Reporter: stack
> Each Store has a Memcache of edits that is flushed on a fixed period (It used to be flushed
when it grew beyond a limit). A Region can be made up of N Stores.  A regionserver has no
upper bound on the number of regions that can be deployed to it currently.  Add to this that
per mapfile, we hold the index in memory.  We're also talking about adding caching
of blocks and cells.
> We need a means of keeping an account of memory usage, adjusting cache sizes and flush
rates (or sizes) dynamically -- using References where possible -- to accommodate deployment
of added regions.  If memory is strained, we should reject regions proffered by the master
with a resource-constrained, or some such, message.
> The manual sizing we currently do ain't going to cut it for clusters of any decent size.
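The "account of memory usage" plus "References where possible" idea in the description could be combined roughly like this. A minimal sketch (not actual HBase code; the class and budget are assumptions for illustration): cache values behind SoftReferences so the JVM may reclaim them under pressure, while tracking an approximate byte count so the server can refuse work when resource-constrained.

```java
import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;

// Hypothetical accounted cache: SoftReferences let the GC reclaim
// entries under memory pressure; accountedBytes approximates how much
// the cache currently holds against a fixed budget.
public class AccountedCache<K> {
    private final Map<K, SoftReference<byte[]>> cache = new HashMap<>();
    private final Map<K, Integer> sizes = new HashMap<>();
    private long accountedBytes = 0;  // approximate bytes held live
    private final long maxBytes;      // hypothetical upper bound

    public AccountedCache(long maxBytes) {
        this.maxBytes = maxBytes;
    }

    /** Returns false when adding the value would exceed the budget. */
    public synchronized boolean put(K key, byte[] value) {
        if (accountedBytes + value.length > maxBytes) {
            return false;  // caller could flush, evict, or reject a region
        }
        cache.put(key, new SoftReference<>(value));
        sizes.put(key, value.length);
        accountedBytes += value.length;
        return true;
    }

    public synchronized byte[] get(K key) {
        SoftReference<byte[]> ref = cache.get(key);
        if (ref == null) {
            return null;
        }
        byte[] value = ref.get();
        if (value == null) {
            // The GC cleared the referent; drop our accounting for it.
            cache.remove(key);
            accountedBytes -= sizes.remove(key);
        }
        return value;
    }

    public synchronized long accountedBytes() { return accountedBytes; }
}
```

The point of the budget check in put() is exactly the rejection path the description asks for: instead of sizing everything by hand, a full put (or a full server-wide account) becomes the signal to flush earlier, shrink caches, or answer the master with a resource-constrained message.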

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
