hbase-issues mailing list archives

From "Feng Honghua (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-10263) make LruBlockCache single/multi/in-memory ratio user-configurable and provide preemptive mode for in-memory type block
Date Wed, 08 Jan 2014 06:00:57 GMT

    [ https://issues.apache.org/jira/browse/HBASE-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13865119#comment-13865119
] 

Feng Honghua commented on HBASE-10263:
--------------------------------------

Before this jira, a rough performance comparison estimate (within a single regionserver) goes
like this: suppose the total size of in-memory data served by this regionserver is M, the total
size of non-in-memory data is N, and the block cache size is C. Then C/4 of the cache is reserved
for in-memory data and 3*C/4 for non-in-memory data, so the random-read cache hit ratio is
C/(4*M) for in-memory data and 3*C/(4*N) for non-in-memory data. Random-read performance for the
two kinds of data is therefore equal when C/(4*M) == 3*C/(4*N), i.e. when M == N/3, so (a small
numeric sketch follows this list):
1. when M > N/3, in-memory table random read performance is worse than an ordinary table's;
2. when M == N/3, in-memory table random read performance is equal to an ordinary table's;
3. when M < N/3, in-memory table random read performance is better than an ordinary table's;
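
A back-of-the-envelope sketch of that estimate, assuming the default 1:2:1 single/multi/in-memory
split (so in-memory blocks get C/4 of the cache and ordinary blocks the remaining 3*C/4); the
class name and example numbers below are purely illustrative, not HBase code:

    // Illustration only: M = in-memory data size, N = ordinary data size,
    // C = block cache size, all in the same unit (e.g. GB).
    public class HitRatioEstimate {
      static double inMemoryHitRatio(double c, double m) {
        return Math.min(1.0, c / (4 * m));       // C/4 of the cache serves M
      }
      static double ordinaryHitRatio(double c, double n) {
        return Math.min(1.0, 3 * c / (4 * n));   // 3*C/4 of the cache serves N
      }
      public static void main(String[] args) {
        double c = 4, m = 8, n = 8;  // e.g. 4GB cache, two 8GB tables, so M > N/3
        System.out.println(inMemoryHitRatio(c, m));  // 0.125
        System.out.println(ordinaryHitRatio(c, n));  // 0.375 -> in-memory reads hit the cache far less often
      }
    }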


> make LruBlockCache single/multi/in-memory ratio user-configurable and provide preemptive
mode for in-memory type block
> ----------------------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-10263
>                 URL: https://issues.apache.org/jira/browse/HBASE-10263
>             Project: HBase
>          Issue Type: Improvement
>          Components: io
>            Reporter: Feng Honghua
>            Assignee: Feng Honghua
>         Attachments: HBASE-10263-trunk_v0.patch, HBASE-10263-trunk_v1.patch, HBASE-10263-trunk_v2.patch
>
>
> currently the single/multi/in-memory ratio in LruBlockCache is hardcoded to 1:2:1, which
can lead to counter-intuitive behavior in some user scenarios: an in-memory table's read
performance can be much worse than an ordinary table's when the two tables' data sizes are
almost equal and both larger than the regionserver's cache size (we ran such an experiment and
verified that in-memory table random read performance was two times worse than the ordinary table's).
> this patch fixes the above issue and provides:
> 1. makes the single/multi/in-memory ratio user-configurable
> 2. provides a configurable switch that makes in-memory blocks preemptive; "preemptive" means
that when this switch is on, an in-memory block can kick out any ordinary block to make room
until no ordinary block remains, while when this switch is off (the default) the behavior is the
same as before, using the single/multi/in-memory ratio to determine eviction.
> by default, both changes above are off and the behavior stays the same as before applying
this patch. it is the client's/user's choice whether, and which, behavior to use by enabling one
of these two enhancements (a rough sketch of how the two knobs could surface follows below).
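
For illustration only, a minimal sketch of how the two knobs described above might be read
through the regionserver's Hadoop Configuration; the property names and defaults below are
assumed placeholders, and the authoritative keys are whatever the committed patch defines:

    import org.apache.hadoop.conf.Configuration;

    // Sketch only: the property names below are assumptions, not necessarily
    // the keys the final patch introduces.
    public class LruCacheTuningSketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // single/multi/in-memory split, previously hardcoded to 1:2:1 (0.25/0.50/0.25)
        float single   = conf.getFloat("hbase.lru.blockcache.single.percentage", 0.25f);
        float multi    = conf.getFloat("hbase.lru.blockcache.multi.percentage", 0.50f);
        float inMemory = conf.getFloat("hbase.lru.blockcache.memory.percentage", 0.25f);
        // the preemptive switch for in-memory blocks, off by default
        boolean forceInMemory = conf.getBoolean("hbase.lru.rs.inmemoryforcemode", false);
        System.out.printf("split=%.2f/%.2f/%.2f forceInMemory=%b%n",
            single, multi, inMemory, forceInMemory);
      }
    }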



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
