hbase-dev mailing list archives

From "Dave Latham (JIRA)" <j...@apache.org>
Subject [jira] [Resolved] (HBASE-9472) If the memstore size is under .1 or greater than .9 the memstore size defaults to the default memstore size
Date Tue, 21 Jul 2015 17:30:05 GMT

     [ https://issues.apache.org/jira/browse/HBASE-9472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]

Dave Latham resolved HBASE-9472.
--------------------------------
    Resolution: Duplicate

> If the memstore size is under .1 or greater than .9 the memstore size defaults to the
default memstore size
> -----------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-9472
>                 URL: https://issues.apache.org/jira/browse/HBASE-9472
>             Project: HBase
>          Issue Type: Bug
>          Components: BlockCache
>    Affects Versions: 0.94.5, 0.99.0
>            Reporter: churro morales
>         Attachments: HBASE-9742-0.94.patch
>
>
> In HBaseConfiguration.checkForClusterFreeMemoryLimit there is a check that blockCache
+ memstore does not exceed 0.8 of the heap; this threshold ensures we do not run out of memory.
> But MemStoreFlusher.getMemStoreLimit does this check:
> {code}
> if (limit >= 0.9f || limit < 0.1f) {
>       LOG.warn("Setting global memstore limit to default of " + defaultLimit +
>         " because supplied value outside allowed range of 0.1 -> 0.9");
>       effectiveLimit = defaultLimit;
>     }
> {code}
> In our cluster we had the block cache set to an upper limit of 0.76 and the memstore
upper limit set to 0.04.  We noticed the memstore size was exceeding the limit we had
set, and after looking at the getMemStoreLimit code it appears that the memstore upper limit
is reset to the default value whenever the configured value is less than 0.1 or at least 0.9.
 This makes the block cache plus memstore greater than our available heap.
> We can remove the check for greater than 90% of the heap, as this can never happen
due to the check in HBaseConfiguration.checkForClusterFreeMemoryLimit().
> This check doesn't seem necessary anymore, as the HBaseConfiguration class already checks
the cluster free limit.  Am I correct in this assumption?
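To make the conflict concrete, here is a minimal standalone sketch (not the actual HBase source) of the clamping behavior quoted above; the default of 0.4f and the class name are assumptions for illustration only:

```java
// Sketch of the getMemStoreLimit clamping described in the issue.
// DEFAULT_LIMIT = 0.4f is assumed here purely for illustration.
public class MemStoreLimitSketch {
    static final float DEFAULT_LIMIT = 0.4f; // assumed default

    static float effectiveLimit(float configured) {
        if (configured >= 0.9f || configured < 0.1f) {
            // Configured value is outside [0.1, 0.9), so the default
            // silently wins -- even when the operator deliberately chose
            // a small memstore to leave room for a large block cache.
            return DEFAULT_LIMIT;
        }
        return configured;
    }

    public static void main(String[] args) {
        // Scenario from the report: block cache 0.76, memstore 0.04.
        float blockCache = 0.76f;
        float memstore = effectiveLimit(0.04f); // clamped to 0.4f
        System.out.println("effective memstore limit: " + memstore);
        System.out.println("blockCache + memstore = " + (blockCache + memstore));
        // The sum exceeds 1.0, i.e. more than the available heap,
        // even though checkForClusterFreeMemoryLimit approved the
        // configured values (0.76 + 0.04 = 0.8).
    }
}
```

The point of the sketch: the configured pair passes the HBaseConfiguration sanity check, but the later clamp replaces 0.04 with a much larger default, so the two checks disagree.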



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
