hbase-issues mailing list archives

From "stack (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-9472) If the memstore size is under .1 or greater than .9 the memstore size defaults to the default memstore size
Date Fri, 08 Aug 2014 18:47:14 GMT

    [ https://issues.apache.org/jira/browse/HBASE-9472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14091136#comment-14091136 ]

stack commented on HBASE-9472:

[~churromorales] Thanks for having a go at fixing this bug of ours.  Doing the config check in
HBaseConfiguration is good enough I'd say.  No need for the duplicate check -- especially if it is
in disagreement w/ the HBC check.  I think the memory math probably belongs in our
memory util classes rather than in HBC.  A trunk patch would be great.

> If the memstore size is under .1 or greater than .9 the memstore size defaults to the default memstore size
> -----------------------------------------------------------------------------------------------------------
>                 Key: HBASE-9472
>                 URL: https://issues.apache.org/jira/browse/HBASE-9472
>             Project: HBase
>          Issue Type: Bug
>          Components: BlockCache
>    Affects Versions: 0.94.5, 0.99.0
>            Reporter: churro morales
>         Attachments: HBASE-9742-0.94.patch
> In HBaseConfiguration.checkForClusterFreeMemoryLimit there is a check that blockCache + memstore does not exceed .8; this threshold ensures we do not run out of memory.
> But MemStoreFlusher.getMemStoreLimit does this check:
> {code}
> if (limit >= 0.9f || limit < 0.1f) {
>   LOG.warn("Setting global memstore limit to default of " + defaultLimit +
>     " because supplied value outside allowed range of 0.1 -> 0.9");
>   effectiveLimit = defaultLimit;
> }
> {code}
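The quoted check silently replaces any out-of-range value with the default rather than failing fast. A minimal standalone sketch of that clamping behavior (the 0.4f default here is a stand-in for the configured default limit, and the class and method names are hypothetical, not HBase API):

```java
public class MemstoreClampDemo {
    // Stand-in for the default global memstore fraction used when
    // the configured value is rejected.
    public static final float DEFAULT_LIMIT = 0.4f;

    // Mirrors the quoted check in MemStoreFlusher.getMemStoreLimit:
    // any value outside [0.1, 0.9) is silently replaced by the default.
    public static float effectiveMemstoreLimit(float configured) {
        if (configured >= 0.9f || configured < 0.1f) {
            return DEFAULT_LIMIT;
        }
        return configured;
    }

    public static void main(String[] args) {
        // The cluster described below: block cache 0.76, memstore 0.04.
        float blockCache = 0.76f;
        float effective = effectiveMemstoreLimit(0.04f);
        // 0.04 is bumped to the default, so the combined fraction of heap
        // reserved becomes blockCache + effective rather than 0.80.
        System.out.println(blockCache + effective);
    }
}
```

A configured 0.04 comes back as 0.4, which is the surprise described in the report below.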
> In our cluster we had the block cache upper limit set to 0.76 and the memstore upper limit set to 0.04.  We noticed the memstore size was exceeding the limit we had set, and after looking at the getMemStoreLimit code it turns out the memstore upper limit is reset to the default value whenever the configured value is less than .1 or greater than .9.  This makes the block cache plus memstore greater than our available heap.
> We can remove the check for greater than 90% of the heap, as that case can never happen due to the check in HBaseConfiguration.checkForClusterFreeMemoryLimit().
> This check doesn't seem necessary anymore, since the HBaseConfiguration class already checks the cluster free memory limit.  Am I correct in this assumption?

This message was sent by Atlassian JIRA