hbase-issues mailing list archives

From "stack (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-4027) Enable direct byte buffers LruBlockCache
Date Tue, 19 Jul 2011 21:40:57 GMT

    [ https://issues.apache.org/jira/browse/HBASE-4027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13067993#comment-13067993

stack commented on HBASE-4027:

Patch looks excellent!

Do you want that change to the surefire heap size in pom.xml?  Are you saying here that tests won't
pass unless you have 5G of RAM?

In the Apache code base, you cannot have '@author' tags.

What does DoubleBlockCache do? (smile)  Usually folks will fill in the class comment on
the what/why of a class's existence.  Would be good here; e.g. why is it called DoubleBlockCache?
 (Similar for other new classes you've added.)

I love this:

+  private LruBlockCache onHeapCache;
+  private SlabCache offHeapCache;

That's sweet.

Remove code rather than comment it out: e.g. +    // onHeapCache.cacheBlock(blockName, buf);

And when we cache a block, how do we know where to put it?  On or off heap?  How do you decide
where it goes?  (Looks like you always put it off heap here.)
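One possible placement policy, along the lines being asked about, is a size-based decision between the two tiers, with gets checking both. This is a hypothetical sketch (all names and the HashMap stand-ins are mine, not from the patch):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a two-tier cache in the spirit of DoubleBlockCache.
// The real patch composes an LruBlockCache and a SlabCache; plain maps are
// used here only to illustrate the placement decision.
public class TwoTierCache {
    private final Map<String, byte[]> onHeap = new HashMap<>();
    private final Map<String, byte[]> offHeap = new HashMap<>(); // stand-in for SlabCache
    private final int maxOffHeapBlock;

    public TwoTierCache(int maxOffHeapBlock) {
        this.maxOffHeapBlock = maxOffHeapBlock;
    }

    public void cacheBlock(String name, byte[] block) {
        // Placement decision: blocks that fit a slab go off heap, the rest on heap.
        if (block.length <= maxOffHeapBlock) {
            offHeap.put(name, block);
        } else {
            onHeap.put(name, block);
        }
    }

    public byte[] getBlock(String name) {
        // Check both tiers on read.
        byte[] b = onHeap.get(name);
        return b != null ? b : offHeap.get(name);
    }

    public static void main(String[] args) {
        TwoTierCache cache = new TwoTierCache(64 * 1024);
        cache.cacheBlock("block-0", new byte[4096]);       // fits a slab: off heap
        cache.cacheBlock("block-1", new byte[128 * 1024]); // too big: on heap
        System.out.println(cache.getBlock("block-0").length); // prints 4096
    }
}
```

Whatever policy the patch actually uses, stating it in the class comment would answer the question above.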

Any chance of some off-heap stats (is this getStats used)?

We only return the heap size of the on-heap cache.  Do you think we should not include the off-heap cache?

Yeah, a bunch of these state methods go unimplemented.  Can we do any of them?  Or is it that
they just don't make sense in an off-heap context?

Do the assignment when you declare the blockSize and slabs data members rather than waiting until
the constructor runs.  Make them final while you are at it.  Similar comment for SingleSizeCache.

What is this limit?  int blocksPerSlab = Integer.MAX_VALUE / blockSize;  Max of ~2G per slab?

Should slabsize just be hardcoded as Integer.MAX_VALUE?
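For reference on the limit in question: a slab backed by a single ByteBuffer is capped at Integer.MAX_VALUE bytes (about 2 GiB, not 4 GiB), so the per-slab block count follows directly. A small sketch of the arithmetic (class and method names are mine):

```java
// Hypothetical illustration of the slab-capacity limit discussed above.
public class SlabMath {
    // A slab implemented as one ByteBuffer is limited to Integer.MAX_VALUE
    // bytes (2^31 - 1, roughly 2 GiB), since buffer offsets are ints.
    static int blocksPerSlab(int blockSize) {
        return Integer.MAX_VALUE / blockSize;
    }

    public static void main(String[] args) {
        // 64 KiB blocks: 2147483647 / 65536 = 32767 blocks per slab.
        System.out.println(blocksPerSlab(64 * 1024)); // prints 32767
    }
}
```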

The kung fu going on in the shutdown of MetaSlab needs a comment.  I think I know what's going
on.  Explain what 'cleaner' is.

Is there something up with the naming in MetaSlab, or is it me misreading?  We are iterating
up to blocksPerSlab but we are creating slabs, not blocks.  I'd think we'd be making blocks
in a slab, not slabs.

Do you think this is going to happen, or how could it happen?

+    if(backingMap.containsKey(blockName)){
+      throw new RuntimeException("Cached an already cached block");
+    }

What's going on here (I see this in a few places):

+    } catch (Exception e) {
+    }
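For comparison, a minimal sketch of surfacing a failure rather than swallowing it with an empty catch block (hypothetical example, not from the patch):

```java
// Hypothetical demo: handle the exception instead of an empty catch block.
public class CatchDemo {
    // Parse a string, falling back to a default while still reporting failure.
    static int parseOrDefault(String s, int dflt) {
        try {
            return Integer.parseInt(s);
        } catch (NumberFormatException e) {
            // At minimum, log the failure so it is not silently lost.
            System.err.println("Bad number '" + s + "': " + e);
            return dflt;
        }
    }

    public static void main(String[] args) {
        System.out.println(parseOrDefault("42", 0));  // prints 42
        System.out.println(parseOrDefault("oops", -1)); // prints -1
    }
}
```

In the cache code the equivalent would be logging via the class's logger, or rethrowing if the caller can act on it.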

The get from backingStore will never return null here?

+      contentBlock = backingStore.get(key).getBlock(key, caching);

Is this a good name for the default blocksize config? "hbase.mapreduce.hfileoutputformat.blocksize"
 The 'outputformat' would seem to come from the mapreduce world (which seems to be where you found
it if I grep src).  Should we be using DEFAULT_BLOCKSIZE from HFile here instead?

There are some copy-paste issues with the javadoc you have in your TestSingleSlabCache.

Hey Li, you get an F in documentation (smile) but the code is great.  Good stuff.

> Enable direct byte buffers LruBlockCache
> ----------------------------------------
>                 Key: HBASE-4027
>                 URL: https://issues.apache.org/jira/browse/HBASE-4027
>             Project: HBase
>          Issue Type: Improvement
>            Reporter: Jason Rutherglen
>            Assignee: Li Pi
>            Priority: Minor
>         Attachments: slabcachepatch.diff, slabcachepatchv2.diff
> Java offers the creation of direct byte buffers which are allocated outside of the heap.
> They need to be manually free'd, which can be accomplished using an undocumented {{clean}}
> method.  The feature will be optional.  After implementing, we can benchmark for differences
> in speed and garbage collection observances.
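A minimal illustration of the direct-buffer allocation the issue describes (hypothetical demo; the eager-free "clean" mechanism itself is JDK-internal and not shown):

```java
import java.nio.ByteBuffer;

// Hypothetical demo of direct byte buffers, whose backing memory lives
// outside the Java heap.
public class DirectBufferDemo {
    // Write and read a long through an off-heap buffer.
    static long roundTrip(long v) {
        ByteBuffer buf = ByteBuffer.allocateDirect(8);
        buf.putLong(0, v);
        return buf.getLong(0);
    }

    public static void main(String[] args) {
        System.out.println(ByteBuffer.allocateDirect(8).isDirect()); // prints true
        System.out.println(roundTrip(42L)); // prints 42
        // By default the off-heap memory is only released when the buffer
        // object is garbage collected; freeing it eagerly requires the
        // undocumented, JDK-internal cleaner hook the issue refers to.
    }
}
```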

This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

