hbase-issues mailing list archives

From "Anoop Sam John (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-16859) Use Bytebuffer pool for non java clients specifically for scans/gets
Date Wed, 15 Mar 2017 00:32:41 GMT

    [ https://issues.apache.org/jira/browse/HBASE-16859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15925331#comment-15925331 ]

Anoop Sam John commented on HBASE-16859:
----------------------------------------

{code}
  /**
   * Writes <code>len</code> bytes from the specified ByteBuffer starting at
   * the current position.
   * Note that it does not change the position of the specified ByteBuffer
   * @param b the data.
   * @exception IOException if an I/O error occurs.
   */
  default public void write(ByteBuffer b) throws IOException {
    write(b, b.position(), b.remaining());
  }
{code}
This default implementation will not change the source BB's position.
{code}
  @Override
  public void write(ByteBuffer b) throws IOException {
    this.buff.put(b);
  }
{code}
But this implementing class's override will change the position. We should stick with one model.
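If the model chosen is that write(ByteBuffer) must not move the source buffer's position, the implementing class could write through a duplicate. A minimal standalone sketch (the class and field names here are illustrative, not HBase's actual ones):

```java
import java.nio.ByteBuffer;

// Illustrative sketch, not HBase code: preserve the source buffer's
// position by writing through a duplicate, which shares the content
// but has an independent position/limit.
public class PositionPreservingWriter {
  private final ByteBuffer buff;

  public PositionPreservingWriter(int capacity) {
    this.buff = ByteBuffer.allocate(capacity);
  }

  public void write(ByteBuffer b) {
    // Advancing the duplicate leaves the caller's buffer untouched.
    this.buff.put(b.duplicate());
  }

  public int written() {
    return buff.position();
  }
}
```

duplicate() costs a small per-call allocation; alternatively the source position could be saved and restored around the put, but the duplicate keeps the caller's buffer intact even if the put fails part-way.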
{code}
      // TODO : Should we check for totalPBSize >= minSizeForReservoirUse
      if (cellBlockStream == null && this.rpcCallback != null && !isClientCellBlockSupported()) {
        ....
      } else {
        ByteBuffer possiblePBBuf =
            (cellBlockSize > 0) ? cellBlock.get(cellBlock.size() - 1) : null;
        ...
      }
{code}
Seems to have code duplication here. Can we just unify the logic around this, please? Whether a CellBlock is in use or not, we use BBs from the reservoir, so in either case we can try to use N BBs, not just one (the if branch already tries that via ByteBuffOutputStream). When a CellBlock is in use, the first BB to consider is the last one in the CellBlock; we don't want to waste any spare space available there. When no CellBlock is in place, this first candidate BB is simply null and we start getting BBs from the reservoir. We also have to check the remaining size needed against minSizeForReservoirUse, not just once at the top level. Say one BB's size is 100 and the need is 101 bytes: after we get a BB from the pool the remaining need is just 1 byte, and it is better not to waste a pooled BB on that. Some cleanup can be done in this area of the code (as in this patch) to unify the if and else flows.
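The unified flow being suggested could look roughly like the sketch below. This is a hypothetical standalone model, not the patch's actual code: the pool is faked with allocateDirect, and names like plan and seed are made up for illustration. The point is the loop: seed with the last CellBlock BB if present, take pooled BBs only while the remaining need justifies one, and finish any small tail with an on-demand heap buffer.

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the unified flow; the pool API and all names
// below are illustrative stand-ins, not HBase's actual internals.
public class ResponseBufferPlanner {
  // Plan which buffers will hold `totalSize` bytes. `seed` is the last
  // CellBlock buffer (which may have spare room), or null when no
  // CellBlock is in use.
  static List<ByteBuffer> plan(ByteBuffer seed, int totalSize,
                               int poolBufSize, int minSizeForReservoirUse) {
    List<ByteBuffer> bufs = new ArrayList<>();
    int need = totalSize;
    if (seed != null && seed.remaining() > 0) {
      bufs.add(seed);                       // reuse spare space first
      need -= Math.min(need, seed.remaining());
    }
    // Take pool buffers only while the remaining need justifies one;
    // a tiny tail (e.g. 1 byte) gets a small on-demand heap buffer instead.
    while (need >= minSizeForReservoirUse) {
      bufs.add(ByteBuffer.allocateDirect(poolBufSize)); // stand-in for pool.getBuffer()
      need -= Math.min(need, poolBufSize);
    }
    if (need > 0) {
      bufs.add(ByteBuffer.allocate(need));
    }
    return bufs;
  }
}
```

With the 100/101 example from the comment, this yields one pooled buffer plus a 1-byte on-demand tail rather than wasting a second pooled buffer.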

> Use Bytebuffer pool for non java clients specifically for scans/gets
> --------------------------------------------------------------------
>
>                 Key: HBASE-16859
>                 URL: https://issues.apache.org/jira/browse/HBASE-16859
>             Project: HBase
>          Issue Type: Sub-task
>            Reporter: ramkrishna.s.vasudevan
>            Assignee: ramkrishna.s.vasudevan
>             Fix For: 2.0.0
>
>         Attachments: HBASE-16859_V1.patch, HBASE-16859_V2.patch, HBASE-16859_V2.patch,
> HBASE-16859_V4.patch, HBASE-16859_V5.patch, HBASE-16859_V6.patch, HBASE-16859_V7.patch
>
>
> In case of non-Java clients we still write the results and header into an on-demand byte[].
> This can be changed to use the BBPool (onheap or offheap buffer?).
> But the basic problem is to identify if the response is for scans/gets. 
> - One easy way to do it is to use the MethodDescriptor per Call and use the name of the
> MethodDescriptor to identify that it is a scan/get. But this would pollute RpcServer with
> checks for scan/get-type responses.
> - Another way is to always set the result to the cellScanner; we know that isClientCellBlockSupported
> is going to be false for non-PB clients, so ignore the cellScanner and go ahead with the results
> in PB. But this is not clean.
> - A third option is that we already have an RpcCallContext being passed to the RS. In the case
> of scans/gets/multiGets we already set an RpcCallback for the shipped call. So here, on response,
> we can check whether the callback is not null and check isClientCellBlockSupported. In that case
> we can get the BB from the pool and write the result and header to that BB. Maybe this looks
> clean?
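The third option quoted above reduces to a simple predicate on the call context. A tiny standalone sketch (all names here are stand-ins, not HBase's actual RpcServer internals):

```java
// Illustrative sketch of the third option: route the response through the
// buffer pool only when a shipped-call callback was registered
// (scan/get/multiGet) and the client cannot consume cell blocks.
public class ResponseRouter {
  static boolean usePooledBuffer(boolean hasShippedCallback,
                                 boolean clientCellBlockSupported) {
    // Non-PB (non-Java) clients: callback is set but cell blocks are
    // unsupported, so the PB result itself goes into a pooled ByteBuffer.
    return hasShippedCallback && !clientCellBlockSupported;
  }
}
```

This keeps the scan/get detection out of the generic RpcServer path, which is the cleanliness argument the description makes for this option.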



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
