hbase-issues mailing list archives

From "stack (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-15788) Use Offheap ByteBuffers from BufferPool to read RPC requests.
Date Tue, 08 Nov 2016 18:54:58 GMT

    [ https://issues.apache.org/jira/browse/HBASE-15788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15648449#comment-15648449 ]

stack commented on HBASE-15788:

bq.  Here we will return back the BBs to pool.

Is this a Runnable for an Executor?

You might want to see some of [~Apache9]'s use of new jdk8 idioms when passing a method to
be run at a later time (smile).

    static interface CallCleanup {
      void run();
    }

... is a little generic... looks like Thread/Runnable. Yeah, you might be interested in Duo-isms
(can do later).  Meantime, just have the cleanup be a Callable? 
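
For illustration, a minimal self-contained sketch of the lambda/Callable idea (the pool interaction is faked with a print; nothing here is from the patch):

    import java.util.concurrent.Callable;

    public class CleanupSketch {
      public static void main(String[] args) throws Exception {
        // A jdk8 lambda can stand in for a one-method interface like
        // CallCleanup; no new named type is needed for a no-arg callback.
        Runnable cleanup = () -> System.out.println("return BBs to pool");
        cleanup.run();

        // Or a Callable, if a result or a checked exception is wanted.
        Callable<Void> cleanup2 = () -> {
          System.out.println("return BBs to pool");
          return null;
        };
        cleanup2.call();
      }
    }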

Should the ByteBuff returned support Close or Clean or Release so you don't have to return
two types?
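
A sketch of that single-return-type idea, using hypothetical names (this is not the actual ByteBuff API):

    import java.nio.ByteBuffer;

    // Hypothetical: fold the cleanup into the returned type so callers get
    // one object back instead of a (ByteBuff, CallCleanup) pair.
    interface ReleasableByteBuff extends AutoCloseable {
      ByteBuffer nioBuffer();
      @Override
      void close(); // returns the backing BBs to the pool
    }

Extending AutoCloseable would also let callers release via try-with-resources.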

If cleaning up allocateByteBuffToReadInto is a pain and it is only used in the one place,
maybe just doc that it has side effects and what they are?

> Use Offheap ByteBuffers from BufferPool to read RPC requests.
> -------------------------------------------------------------
>                 Key: HBASE-15788
>                 URL: https://issues.apache.org/jira/browse/HBASE-15788
>             Project: HBase
>          Issue Type: Sub-task
>          Components: regionserver
>            Reporter: ramkrishna.s.vasudevan
>            Assignee: Anoop Sam John
>             Fix For: 2.0.0
>         Attachments: HBASE-15788.patch, HBASE-15788_V4.patch, HBASE-15788_V5.patch, HBASE-15788_V6.patch
> Right now, when an RPC request reaches the RpcServer, we read the request into an on-demand-created
byte[]. When it is a write request including many mutations, the request size will be
somewhat larger, and we end up creating many temporary on-heap byte[]s, causing more GCs.
> We have a ByteBufferPool of fixed-size off-heap BBs. Currently it is used at the RpcServer
only while sending read responses. We can make use of the same pool while reading requests too.
Instead of reading the whole of the request bytes into a single BB, we can read into N BBs
(based on the request size). When a BB is not available from the pool, we will fall back to
the old way of on-demand on-heap byte[] creation.
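> As a rough standalone illustration of that allocation scheme (the pool here is a toy
stand-in, not HBase's actual ByteBufferPool):

    import java.nio.ByteBuffer;
    import java.util.ArrayDeque;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Queue;

    public class ReadIntoPooledBBs {
      // Toy stand-in for the fixed-size off-heap buffer pool described above.
      static final int BUF_SIZE = 64 * 1024;
      static final Queue<ByteBuffer> POOL = new ArrayDeque<>();
      static {
        for (int i = 0; i < 4; i++) {
          POOL.add(ByteBuffer.allocateDirect(BUF_SIZE));
        }
      }

      // Hand out enough pooled BBs to hold reqSize bytes; if the pool runs
      // dry, fall back to a single on-demand on-heap byte[] as before.
      static List<ByteBuffer> allocateForRequest(int reqSize) {
        List<ByteBuffer> bufs = new ArrayList<>();
        int remaining = reqSize;
        while (remaining > 0) {
          ByteBuffer bb = POOL.poll();
          if (bb == null) {      // pool exhausted: old on-heap path
            POOL.addAll(bufs);   // return what we already took
            bufs.clear();
            bufs.add(ByteBuffer.wrap(new byte[reqSize]));
            return bufs;
          }
          bb.clear();
          bb.limit(Math.min(BUF_SIZE, remaining));
          bufs.add(bb);
          remaining -= bb.limit();
        }
        return bufs;
      }

      public static void main(String[] args) {
        System.out.println(allocateForRequest(150 * 1024).size()); // 3
      }
    }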
> Remember these are off-heap BBs. We read many proto objects (header, Mutation protos, etc.)
out of these request bytes. Thanks to PB 3 and our shading work, proto parsing supports off-heap
BBs now. The payload Cells are also in these DBBs now. The codec decoder can work on these and
create Cells backed by the off-heap BBs. The whole of our write path works with Cells now. At
the time of addition to the memstore, these Cells are by default copied to MSLAB (an
off-heap-based pooled MSLAB issue will follow this one). If the MSLAB copy is not possible, we
will do a copy to on-heap.
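> To make the PB 3 point concrete, here is a tiny sketch parsing straight off a direct
ByteBuffer (plain com.google.protobuf shown for brevity; HBase uses the shaded package):

    import com.google.protobuf.CodedInputStream;
    import java.nio.ByteBuffer;

    public class OffheapProtoRead {
      public static void main(String[] args) throws Exception {
        // PB 3's CodedInputStream can read directly from a (direct)
        // ByteBuffer, so no copy to an on-heap byte[] is needed.
        ByteBuffer dbb = ByteBuffer.allocateDirect(3);
        dbb.put((byte) 0x08).put((byte) 0x96).put((byte) 0x01); // field 1, varint 150
        dbb.flip();
        CodedInputStream cis = CodedInputStream.newInstance(dbb);
        System.out.println(cis.readTag());         // 8: field 1, wire type 0
        System.out.println(cis.readRawVarint32()); // 150
      }
    }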
> One possible downside of this:
> Before adding to the Memstore, we write to the WAL, so the Cells created out of the off-heap
BBs (Codec#Decoder) will be used to write to the WAL. The default FSHLog works with an
OutputStream obtained from the DFSClient, which has only the standard byte[]-based write APIs.
So just to write to the WAL, we end up making a temporary on-heap copy for each Cell. The other
WAL implementation (i.e. AsyncWAL) supports writing off-heap Cells directly, and we have work
in progress to make AsyncWAL the default. We could also raise an HDFS request to support
BB-based write APIs in their client OutputStream. Until then, we will try for a temporary
workaround solution. The patch says more on this.
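> A minimal sketch of the per-Cell copy being described (illustrative only; FSHLog's actual
write path is more involved):

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.OutputStream;
    import java.nio.ByteBuffer;

    public class WalCopySketch {
      // OutputStream#write only takes byte[], so an off-heap Cell's bytes
      // must be copied on-heap before they can go to the WAL stream.
      static void writeCell(ByteBuffer offheapCell, OutputStream os)
          throws IOException {
        byte[] tmp = new byte[offheapCell.remaining()]; // temp on-heap copy
        offheapCell.duplicate().get(tmp);
        os.write(tmp);
      }

      public static void main(String[] args) throws IOException {
        ByteBuffer cell = ByteBuffer.allocateDirect(8);
        cell.putLong(42L);
        cell.flip();
        writeCell(cell, new ByteArrayOutputStream());
      }
    }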
