hbase-issues mailing list archives

From "Vladimir Rodionov (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-13874) Fix 0.8 being hardcoded sum of blockcache + memstore; doesn't make sense when big heap
Date Tue, 09 Jun 2015 22:54:00 GMT

    [ https://issues.apache.org/jira/browse/HBASE-13874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14579710#comment-14579710 ]

Vladimir Rodionov commented on HBASE-13874:
-------------------------------------------

[~saint.ack@gmail.com] asked:
{quote}
Vladimir Rodionov you think 1.5g is too low?
{quote}

Yes. In the past (pre-0.98) we observed OOMEs with heaps below 8GB while running M/R jobs
(1.5-2GB reserved for HBase), with the standard block cache and memstore settings. Can't
say for 0.98+ since heaps are larger now :).
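
The arithmetic behind that observation can be sketched as follows (illustrative only, not HBase code; the fractions are the pre-1.0 defaults for hbase.regionserver.global.memstore.upperLimit and hfile.block.cache.size):

```java
public class HeapHeadroom {
    public static void main(String[] args) {
        double memstoreFraction = 0.4;    // default hbase.regionserver.global.memstore.upperLimit
        double blockCacheFraction = 0.4;  // default hfile.block.cache.size
        long heapBytes = 2L * 1024 * 1024 * 1024; // assume 2 GB reserved for HBase

        // With 0.4 + 0.4 committed, only ~0.2 of the heap is left over.
        double freeFraction = 1.0 - (memstoreFraction + blockCacheFraction);
        long freeBytes = (long) (heapBytes * freeFraction);

        // Roughly 400 MB remains for everything else (RPC buffers, compactions,
        // client overhead), which a busy M/R workload can easily exhaust.
        System.out.println("free bytes: " + freeBytes);
    }
}
```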

> Fix 0.8 being hardcoded sum of blockcache + memstore; doesn't make sense when big heap
> --------------------------------------------------------------------------------------
>
>                 Key: HBASE-13874
>                 URL: https://issues.apache.org/jira/browse/HBASE-13874
>             Project: HBase
>          Issue Type: Task
>            Reporter: stack
>            Assignee: Esteban Gutierrez
>         Attachments: 0001-HBASE-13874-Fix-0.8-being-hardcoded-sum-of-blockcach.patch
>
>
> Fix this in HBaseConfiguration:
> {code}
>   private static void checkForClusterFreeMemoryLimit(Configuration conf) {
>     float globalMemstoreLimit = conf.getFloat("hbase.regionserver.global.memstore.upperLimit", 0.4f);
>     int gml = (int)(globalMemstoreLimit * CONVERT_TO_PERCENTAGE);
>     float blockCacheUpperLimit =
>       conf.getFloat(HConstants.HFILE_BLOCK_CACHE_SIZE_KEY,
>         HConstants.HFILE_BLOCK_CACHE_SIZE_DEFAULT);
>     int bcul = (int)(blockCacheUpperLimit * CONVERT_TO_PERCENTAGE);
>     if (CONVERT_TO_PERCENTAGE - (gml + bcul)
>             < (int)(CONVERT_TO_PERCENTAGE *
>                     HConstants.HBASE_CLUSTER_MINIMUM_MEMORY_THRESHOLD)) {
>       throw new RuntimeException(
>         "Current heap configuration for MemStore and BlockCache exceeds " +
>         "the threshold required for successful cluster operation. " +
>         "The combined value cannot exceed 0.8. Please check " +
>         "the settings for hbase.regionserver.global.memstore.upperLimit and " +
>         "hfile.block.cache.size in your configuration. " +
>         "hbase.regionserver.global.memstore.upperLimit is " +
>         globalMemstoreLimit +
>         " hfile.block.cache.size is " + blockCacheUpperLimit);
>     }
>   }
> {code}
> Hardcoding 0.8 doesn't make much sense in a heap of 100G+ (that is 20G over for hbase itself -- more than enough).
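
For scale, one direction such a fix could take can be sketched as below. This is purely illustrative and is not the attached patch; the 4 GB absolute floor is a hypothetical number chosen to show why a fixed free-heap fraction over-reserves on big heaps:

```java
public class FreeHeapCheck {
    // The current check hardcodes combined memstore + blockcache <= 0.8,
    // i.e. at least 0.2 of the heap must stay free.
    static final double MINIMUM_FREE_FRACTION = 0.2;
    // Hypothetical absolute floor (assumed value, for illustration only).
    static final long HYPOTHETICAL_FLOOR_BYTES = 4L << 30; // 4 GB

    static long requiredFreeBytes(long heapBytes) {
        // Keep the fractional rule for small heaps, but stop over-reserving
        // once the fraction exceeds the absolute floor.
        return Math.min((long) (heapBytes * MINIMUM_FREE_FRACTION),
                        HYPOTHETICAL_FLOOR_BYTES);
    }

    public static void main(String[] args) {
        long heap100g = 100L << 30;
        // The fractional rule alone would reserve 20 GB on a 100 GB heap;
        // the floor caps the requirement at 4 GB.
        System.out.println(requiredFreeBytes(heap100g) >> 30); // prints 4
    }
}
```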



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
