hbase-issues mailing list archives

From "stack (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-15240) Go Big BucketCache Fixes
Date Wed, 10 Feb 2016 00:50:18 GMT

    [ https://issues.apache.org/jira/browse/HBASE-15240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15140117#comment-15140117 ]

stack commented on HBASE-15240:

hbase.ui.blockcache.by.file.max defaults to 100000, meaning we cache at most 100000 blocks
from any one file. If the cache is backed by a big SSD and you want to cache all data, this
100000 limit kicks in and we stop loading more blocks from the file without emitting anything
in the log. It also seems like this 100000 limit is a max on all blocks in the cache, at
least according to the UI. Dig.
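To illustrate the first complaint, here is a hypothetical sketch (not HBase's actual code; class and method names are made up) of a per-file cap that at least logs when the limit cuts in, instead of silently dropping further blocks:

```java
// Hypothetical sketch: a per-file cap on cached blocks that emits a log line
// when the limit kicks in, instead of stopping silently as described above.
import java.util.HashMap;
import java.util.Map;

public class PerFileBlockCap {
    // Assumed default mirroring hbase.ui.blockcache.by.file.max
    static final int DEFAULT_MAX_BLOCKS_PER_FILE = 100_000;

    private final int maxBlocksPerFile;
    private final Map<String, Integer> blocksPerFile = new HashMap<>();

    PerFileBlockCap(int maxBlocksPerFile) {
        this.maxBlocksPerFile = maxBlocksPerFile;
    }

    /** Returns true if the block was admitted; logs once the cap is hit. */
    boolean tryAdmit(String fileName) {
        int count = blocksPerFile.getOrDefault(fileName, 0);
        if (count >= maxBlocksPerFile) {
            // The gap noted above: today nothing is emitted when the cap cuts in.
            System.err.println("Per-file block cap " + maxBlocksPerFile
                + " reached for " + fileName + "; skipping further blocks");
            return false;
        }
        blocksPerFile.put(fileName, count + 1);
        return true;
    }
}
```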

UI gets hosed when I have 2M+ blocks (though we are no longer trying to draw them).

When using prefetch, the 'wait' flag on the queue used to write blocks out to the cache is
hardcoded to false, so we skip out on warming the cache at startup.

Nor can we configure how long to wait; it is currently hardcoded at 50ms.
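A minimal sketch of what a fix might look like (hypothetical class and names, not HBase's actual writer-queue code): make both the wait flag and the timeout configurable instead of hardcoding wait=false and 50ms.

```java
// Hypothetical sketch: a bounded writer queue where the offer wait and its
// timeout are configurable, rather than hardcoded as described above.
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class CacheWriterQueue {
    private final BlockingQueue<byte[]> queue;
    private final long offerTimeoutMs;  // today effectively hardcoded at 50ms

    CacheWriterQueue(int capacity, long offerTimeoutMs) {
        this.queue = new ArrayBlockingQueue<>(capacity);
        this.offerTimeoutMs = offerTimeoutMs;
    }

    /**
     * If wait is true, block up to offerTimeoutMs for queue space (useful
     * when prefetch is warming the cache at startup); otherwise drop the
     * block immediately when the queue is full.
     */
    boolean enqueue(byte[] block, boolean wait) throws InterruptedException {
        if (wait) {
            return queue.offer(block, offerTimeoutMs, TimeUnit.MILLISECONDS);
        }
        return queue.offer(block);
    }
}
```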

> Go Big BucketCache Fixes
> ------------------------
>                 Key: HBASE-15240
>                 URL: https://issues.apache.org/jira/browse/HBASE-15240
>             Project: HBase
>          Issue Type: Umbrella
>          Components: BucketCache
>            Reporter: stack
>            Assignee: stack
> Umbrella issue to which we will attach issues that prevent bucketcache going big; there's a few.
