hbase-issues mailing list archives

From "ramkrishna.s.vasudevan (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-16630) Fragmentation in long running Bucket Cache
Date Wed, 14 Sep 2016 07:08:20 GMT

    [ https://issues.apache.org/jira/browse/HBASE-16630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15489646#comment-15489646 ]

ramkrishna.s.vasudevan commented on HBASE-16630:
------------------------------------------------

Nice find. I believe a similar issue was reported with file mode as well, where it was said
that the cache is not used efficiently. We need to check that JIRA too; I don't have the ID now.
On the patch:
-> You try to release the read lock only when you get the write lock.
-> relocatedCount++; - better to increment this after the task is actually done.
-> 
{code}
backingMap.put(key, new BucketEntry(newOffset, len, bucketEntry.accessCounter,
    bucketEntry.getPriority()));
{code}
Should the key be removed once we get the write lock, or is it okay to overwrite the key with
the new value? I am asking in case some other request is asking for this key at the same time
the deFragmentation happens.
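
To illustrate the question, a minimal sketch (simplified, hypothetical types standing in for the
BucketCache internals; this is not the actual patch) of relocating an entry under the write lock,
with the counter bumped only after the relocation is applied:
{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class RelocationSketch {
  // Trimmed-down stand-in for the real BucketEntry.
  static final class BucketEntry {
    final long offset;
    final int len;
    final long accessCounter;
    BucketEntry(long offset, int len, long accessCounter) {
      this.offset = offset;
      this.len = len;
      this.accessCounter = accessCounter;
    }
  }

  private final ConcurrentHashMap<String, BucketEntry> backingMap = new ConcurrentHashMap<>();
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
  private int relocatedCount = 0;

  void relocate(String key, BucketEntry old, long newOffset) {
    lock.writeLock().lock();
    try {
      // Overwrite in place: the key never disappears from the map, so a
      // concurrent lookup sees either the old entry or the new one.
      backingMap.put(key, new BucketEntry(newOffset, old.len, old.accessCounter));

      // The alternative raised above (remove first, then put) leaves a window
      // in which a concurrent get() returns null for the key:
      //   backingMap.remove(key);
      //   backingMap.put(key, new BucketEntry(newOffset, old.len, old.accessCounter));

      relocatedCount++;   // incremented only after the relocation has been applied
    } finally {
      lock.writeLock().unlock();
    }
  }
}
{code}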
{code}
public void setTo(long free, long used, long itemSize,
    float nonZeroOccupancyRatio) {
{code}
When does this exact update happen?

Overall, would it be better to improve the way the buckets are allocated - would that improve things?
 
Also, what is the impact of this deFragmentation under a real read load, since we iterate through
every key? Would it be better to do this in a separate thread where we hold the most fragmented
bucket in a queue and keep defragmenting it? But maybe that won't work because eviction
is random and not in our hands?
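
As a rough sketch of that separate-thread idea (all names here are hypothetical, not part of the
patch): a background worker that orders bucket sizes by occupancy and compacts the worst one each
round. Since eviction is random and outside our control, the ordering is rebuilt from a fresh
snapshot every round rather than maintained incrementally:
{code}
import java.util.Comparator;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.PriorityBlockingQueue;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class DefragWorkerSketch {
  // Stand-in for per-bucket-size statistics (e.g. occupancy from the allocator).
  record SizeStats(int bucketSizeIndex, double occupancyRatio) {}

  interface Defragmenter {
    List<SizeStats> snapshotStats();        // current stats for every bucket size
    void defragment(int bucketSizeIndex);   // relocate blocks for one bucket size
  }

  // Lowest occupancy (most fragmented) bucket size first.
  private final PriorityBlockingQueue<SizeStats> queue =
      new PriorityBlockingQueue<>(16, Comparator.comparingDouble(SizeStats::occupancyRatio));
  private final ScheduledExecutorService pool = Executors.newSingleThreadScheduledExecutor();

  void start(Defragmenter defrag, double occupancyThreshold) {
    pool.scheduleWithFixedDelay(() -> {
      queue.clear();
      queue.addAll(defrag.snapshotStats());   // rebuild the ordering each round
      SizeStats worst = queue.peek();
      if (worst != null && worst.occupancyRatio() < occupancyThreshold) {
        defrag.defragment(worst.bucketSizeIndex());
      }
    }, 1, 1, TimeUnit.MINUTES);
  }
}
{code}
Whether the extra relocation I/O is acceptable under a real read load is exactly the open question above.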


> Fragmentation in long running Bucket Cache
> ------------------------------------------
>
>                 Key: HBASE-16630
>                 URL: https://issues.apache.org/jira/browse/HBASE-16630
>             Project: HBase
>          Issue Type: Bug
>          Components: BucketCache
>    Affects Versions: 2.0.0, 1.1.6, 1.3.1, 1.2.3
>            Reporter: deepankar
>            Assignee: deepankar
>         Attachments: HBASE-16630.patch
>
>
> As we have been running the bucket cache for a long time in our system, we are observing cases
where some nodes, after some time, do not fully utilize the bucket cache; in some cases it is
even worse, in the sense that they get stuck at a value < 0.25 % of the bucket cache (DEFAULT_MEMORY_FACTOR,
as all our tables are configured in-memory for simplicity's sake).
> We took a heap dump, analyzed what is happening, and saw that it is a classic case of fragmentation:
the current implementation of BucketCache (mainly BucketAllocator) relies on the logic that fullyFreeBuckets
are available for switching/adjusting cache usage between different bucketSizes. But once
a compaction / bulkload happens and blocks are evicted from a bucket size, they are
usually evicted from random places in the buckets of that bucketSize, thus locking in the number
of buckets associated with that bucketSize. In the worst cases of fragmentation we have
seen some bucketSizes with an occupancy ratio of < 10 %, but they don't have any completelyFreeBuckets
to share with the other bucketSizes.
> Currently the existing eviction logic helps in the cases where the cache used is more than the
MEMORY_FACTOR or MULTI_FACTOR, and once those evictions are also done, the eviction (freeSpace
function) will not evict anything more and the cache utilization will be stuck at that value without
any allocations for the other required sizes.
> The fix we came up with is simple: we do deFragmentation (compaction) of
the bucketSize, thus increasing the occupancy ratio and also freeing up buckets to
be fullyFree. The logic itself is not complicated, as the bucketAllocator takes care of packing
the blocks into the buckets; we need to evict and re-allocate the blocks for all the bucketSizes
that don't fit the criteria.
> I am attaching an initial patch just to give an idea of what we are thinking and I'll
improve it based on the comments from the community.
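
To make the fragmentation scenario described above concrete, here is a toy simulation (not HBase
code; the bucket count, blocks per bucket, and eviction rate are made-up numbers): evicting blocks
from random positions drops occupancy sharply while leaving almost no completely free buckets that
other bucketSizes could take over:
{code}
import java.util.Random;

public class FragmentationSketch {
  public static void main(String[] args) {
    final int buckets = 1000;          // buckets belonging to one bucketSize
    final int blocksPerBucket = 32;    // block slots per bucket
    final double evictProb = 0.75;     // fraction of blocks evicted at random

    Random rnd = new Random(42);
    long usedBlocks = 0, completelyFree = 0;
    for (int b = 0; b < buckets; b++) {
      int remaining = 0;
      for (int s = 0; s < blocksPerBucket; s++) {
        if (rnd.nextDouble() >= evictProb) {
          remaining++;                 // this block survived the eviction
        }
      }
      usedBlocks += remaining;
      if (remaining == 0) {
        completelyFree++;              // only these can be handed to other sizes
      }
    }
    System.out.printf("occupancy = %.1f%%, completely free buckets = %d of %d%n",
        100.0 * usedBlocks / (buckets * blocksPerBucket), completelyFree, buckets);
    // Typically prints roughly 25% occupancy with zero completely free buckets:
    // the space stays locked to this bucketSize but is unusable elsewhere.
  }
}
{code}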



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
