hbase-dev mailing list archives

From "stack (JIRA)" <j...@apache.org>
Subject [jira] [Resolved] (HBASE-15241) Blockcache hits hbase.ui.blockcache.by.file.max limit and is silent that it will load no more blocks
Date Wed, 10 Feb 2016 00:46:18 GMT

     [ https://issues.apache.org/jira/browse/HBASE-15241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

stack resolved HBASE-15241.
---------------------------
    Resolution: Invalid

Let me start over.

> Blockcache hits hbase.ui.blockcache.by.file.max limit and is silent that it will load no more blocks
> -----------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-15241
>                 URL: https://issues.apache.org/jira/browse/HBASE-15241
>             Project: HBase
>          Issue Type: Sub-task
>          Components: BucketCache
>            Reporter: stack
>
> We can only load 100k blocks from a single file. With 256 GB of SSD and 4 KB blocks (sized to align with SSD block reads), that is 64M blocks in total, so if you want the whole device used for cache, the 100k per-file limit gets in the way (the 100k may be an absolute limit... checking; in the UI I see 100k only). There is a configuration, hbase.ui.blockcache.by.file.max, that lets you raise the per-file count. This helps.
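For reference, raising the per-file limit would be done in hbase-site.xml like any other HBase setting; a sketch follows (the value 500000 is only an illustrative example, not a recommendation from this ticket):

```xml
<!-- hbase-site.xml: raise the per-file blockcache count limit
     discussed above. 500000 is an arbitrary example value. -->
<property>
  <name>hbase.ui.blockcache.by.file.max</name>
  <value>500000</value>
</property>
```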



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
