hbase-issues mailing list archives

From "stack (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-13874) Fix 0.8 being hardcoded sum of blockcache + memstore; doesn't make sense when big heap
Date Wed, 10 Jun 2015 05:35:01 GMT

    [ https://issues.apache.org/jira/browse/HBASE-13874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14580048#comment-14580048 ]

stack commented on HBASE-13874:
-------------------------------

[~esteban] Patch looks good to me. Don't you need to mention your new config in the RuntimeException
thrown, so folks have a fighting chance of figuring out what to change to make stuff work? And,
paranoia on my part, how about a little test? I see there is already a TestHMMU... could add
a little one that tries sensible and crazypants values just to ensure it does as expected. Fix the
checkForClusterFreeMemoryLimit javadoc on commit too (talks about 20%...smile).
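
For concreteness, here is a rough sketch of the kind of test meant above, assuming the check is still reachable through HBaseConfiguration.addHbaseResources(Configuration); the class name and the chosen values are illustrative only, not taken from the attached patch:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HConstants;
import org.junit.Test;

public class TestClusterFreeMemoryLimit {

  @Test
  public void sensibleValuesPass() {
    // 0.4 memstore + 0.3 block cache leaves 0.3 of the heap free; the check should pass.
    Configuration conf = new Configuration();
    conf.setFloat("hbase.regionserver.global.memstore.upperLimit", 0.4f);
    conf.setFloat(HConstants.HFILE_BLOCK_CACHE_SIZE_KEY, 0.3f);
    HBaseConfiguration.addHbaseResources(conf);
  }

  @Test(expected = RuntimeException.class)
  public void crazypantsValuesFail() {
    // 0.7 + 0.6 adds up to more than the whole heap; the check should throw.
    Configuration conf = new Configuration();
    conf.setFloat("hbase.regionserver.global.memstore.upperLimit", 0.7f);
    conf.setFloat(HConstants.HFILE_BLOCK_CACHE_SIZE_KEY, 0.6f);
    HBaseConfiguration.addHbaseResources(conf);
  }
}
{code}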

> Fix 0.8 being hardcoded sum of blockcache + memstore; doesn't make sense when big heap
> --------------------------------------------------------------------------------------
>
>                 Key: HBASE-13874
>                 URL: https://issues.apache.org/jira/browse/HBASE-13874
>             Project: HBase
>          Issue Type: Task
>            Reporter: stack
>            Assignee: Esteban Gutierrez
>         Attachments: 0001-HBASE-13874-Fix-0.8-being-hardcoded-sum-of-blockcach.patch
>
>
> Fix this in HBaseConfiguration:
> {code}
>   private static void checkForClusterFreeMemoryLimit(Configuration conf) {
>     float globalMemstoreLimit =
>         conf.getFloat("hbase.regionserver.global.memstore.upperLimit", 0.4f);
>     int gml = (int) (globalMemstoreLimit * CONVERT_TO_PERCENTAGE);
>     float blockCacheUpperLimit =
>         conf.getFloat(HConstants.HFILE_BLOCK_CACHE_SIZE_KEY,
>             HConstants.HFILE_BLOCK_CACHE_SIZE_DEFAULT);
>     int bcul = (int) (blockCacheUpperLimit * CONVERT_TO_PERCENTAGE);
>     if (CONVERT_TO_PERCENTAGE - (gml + bcul)
>         < (int) (CONVERT_TO_PERCENTAGE * HConstants.HBASE_CLUSTER_MINIMUM_MEMORY_THRESHOLD)) {
>       throw new RuntimeException(
>           "Current heap configuration for MemStore and BlockCache exceeds " +
>           "the threshold required for successful cluster operation. " +
>           "The combined value cannot exceed 0.8. Please check " +
>           "the settings for hbase.regionserver.global.memstore.upperLimit and " +
>           "hfile.block.cache.size in your configuration. " +
>           "hbase.regionserver.global.memstore.upperLimit is " + globalMemstoreLimit +
>           " hfile.block.cache.size is " + blockCacheUpperLimit);
>     }
>   }
> {code}
> Hardcoding 0.8 doesn't make much sense in a heap of 100G+ (that leaves 20G over for hbase itself -- more than enough).
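
For illustration, a rough sketch (not the attached patch) of how the check could read the minimum free-heap fraction from the configuration instead of relying on the hardcoded constant; the property name hbase.regionserver.minimum.free.heap.fraction is hypothetical:

{code}
  // Sketch only: "hbase.regionserver.minimum.free.heap.fraction" is a hypothetical
  // property name, not the one introduced by the attached patch.
  private static void checkForClusterFreeMemoryLimit(Configuration conf) {
    float globalMemstoreLimit =
        conf.getFloat("hbase.regionserver.global.memstore.upperLimit", 0.4f);
    float blockCacheUpperLimit =
        conf.getFloat(HConstants.HFILE_BLOCK_CACHE_SIZE_KEY,
            HConstants.HFILE_BLOCK_CACHE_SIZE_DEFAULT);
    // Read the required free fraction from the configuration, defaulting to the old
    // HBASE_CLUSTER_MINIMUM_MEMORY_THRESHOLD constant so existing setups behave as before.
    float minFreeFraction =
        conf.getFloat("hbase.regionserver.minimum.free.heap.fraction",
            HConstants.HBASE_CLUSTER_MINIMUM_MEMORY_THRESHOLD);
    if (1.0f - (globalMemstoreLimit + blockCacheUpperLimit) < minFreeFraction) {
      throw new RuntimeException(
          "Current heap configuration for MemStore and BlockCache exceeds "
          + "the threshold required for successful cluster operation. "
          + "Their sum cannot exceed " + (1.0f - minFreeFraction) + ". Please check "
          + "hbase.regionserver.global.memstore.upperLimit (" + globalMemstoreLimit
          + "), hfile.block.cache.size (" + blockCacheUpperLimit
          + ") and hbase.regionserver.minimum.free.heap.fraction ("
          + minFreeFraction + ") in your configuration.");
    }
  }
{code}

Naming all three properties in the exception message gives operators the fighting chance asked for in the comment above.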



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
