hbase-issues mailing list archives

From "ramkrishna.s.vasudevan (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-14490) [RpcServer] reuse request read buffer
Date Fri, 12 Feb 2016 04:59:18 GMT

    [ https://issues.apache.org/jira/browse/HBASE-14490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15144048#comment-15144048 ]

ramkrishna.s.vasudevan commented on HBASE-14490:
------------------------------------------------

Just yesterday I was working on this area of the code.
{code}
data = reqBufPool.getBuffer();
if (data.capacity() < dataLength) {
  data = ByteBuffer.allocate(dataLength);
} else {
  data.limit(dataLength);
}
{code}
This step of limiting the buffer is very important; without it things do not work correctly (see the sketch just below).
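To make the point concrete, here is a minimal sketch of the read loop that depends on that limit (hypothetical names; {{channel}}, {{pooled}} and the helper are illustrative, not the actual RpcServer code):
{code}
import java.io.EOFException;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.ReadableByteChannel;

final class RequestReadSketch {
  // Hypothetical helper: read exactly dataLength bytes of one request into a
  // pooled buffer. Without limit(dataLength), a pooled buffer with spare
  // capacity would keep reading and swallow bytes of the next request.
  static ByteBuffer readRequest(ReadableByteChannel channel, ByteBuffer pooled,
      int dataLength) throws IOException {
    ByteBuffer data;
    if (pooled.capacity() < dataLength) {
      data = ByteBuffer.allocate(dataLength); // pool buffer too small; one-off allocation
    } else {
      pooled.clear();
      pooled.limit(dataLength);               // stop the read at the request boundary
      data = pooled;
    }
    while (data.hasRemaining()) {             // fills [position, limit) only
      if (channel.read(data) < 0) {
        throw new EOFException("connection closed mid-request");
      }
    }
    data.flip();                              // ready for decoding
    return data;
  }
}
{code}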
As Anoop said, returning the buffer to the pool in the finally block is not correct. When you turn on cellblocks
in the write request, things get messed up, because the response may still be referencing the request bytes. Tested that; the buffer has to be returned either
when the call is completed, or we need a mechanism for deciding when to get rid of the BB.
For now Call.done() should be okay.
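It could look roughly like this (a minimal sketch with hypothetical names; the actual change is in the attached patches):
{code}
import java.nio.ByteBuffer;

// Hypothetical stand-in for the pool attached to this issue.
interface ByteBufferPool {
  ByteBuffer getBuffer();
  void putBuffer(ByteBuffer buf);
}

// Sketch: the Call owns the request buffer until the response is written,
// because cellblocks in the response may still reference these bytes.
class Call {
  private final ByteBuffer data;      // request bytes read from the socket
  private final ByteBufferPool pool;

  Call(ByteBuffer data, ByteBufferPool pool) {
    this.data = data;
    this.pool = pool;
  }

  // Invoked once the response has gone out; only now is it safe to let
  // another request overwrite the buffer's contents.
  void done() {
    pool.putBuffer(data);
  }
}
{code}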
But there are other things to consider too, in my opinion, such as what the maximum capacity
of this BBPool should be. Do we need separate pools for read and write? IMHO yes.
But Anoop had other suggestions: one concern is the GC holding on to these pools, and another is
that we don't know up front whether a request is a read or a write. I still think we can manage that, but the GC
holding on to these pools is what I need to evaluate. Will check on that.
Another thing to note: if we are creating DBBs from this pool and are not able to add one
back (for instance because the capacity had to be increased), how will those DBBs be GCed? Hence
choosing a suitable capacity for this pool is very important.
A DBB pool also avoids the copy that the Oracle impl does internally (a heap ByteBuffer gets copied into a temporary direct buffer on every channel read/write). So +1 for doing it.
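On the capacity question, one direction is a fixed-size, bounded pool. A rough sketch, assuming illustrative names and numbers rather than the ByteBufferPool attached here:
{code}
import java.nio.ByteBuffer;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of a bounded direct-buffer pool. Bounding matters twice over:
// it caps the off-heap memory the pool pins, and any buffer the pool
// refuses is simply dropped, so its Cleaner can free it at the next GC.
class BoundedByteBufferPool {
  private final ConcurrentLinkedQueue<ByteBuffer> buffers =
      new ConcurrentLinkedQueue<>();
  private final AtomicInteger pooledCount = new AtomicInteger();
  private final int bufferSize; // one fixed size; bigger requests allocate outside the pool
  private final int maxPooled;  // advisory cap; a racing put may overshoot it briefly

  BoundedByteBufferPool(int bufferSize, int maxPooled) {
    this.bufferSize = bufferSize;
    this.maxPooled = maxPooled;
  }

  ByteBuffer getBuffer() {
    ByteBuffer buf = buffers.poll();
    if (buf != null) {
      pooledCount.decrementAndGet();
      buf.clear();
      return buf;
    }
    return ByteBuffer.allocateDirect(bufferSize);
  }

  void putBuffer(ByteBuffer buf) {
    // Take back only buffers of the pool's size, and only up to the cap;
    // everything else is left for the GC to reclaim.
    if (buf.capacity() == bufferSize && pooledCount.get() < maxPooled) {
      buffers.offer(buf);
      pooledCount.incrementAndGet();
    }
  }
}
{code}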
bq. Unless there is a noticeable benefit, is there a reason to try and defeat the GC? Short
lived objects are very very cheap with newer jvm's and adding complexity that would be disabled
for people running a newer jvm seems weird.
This is something I saw in Elliott's comment on another JIRA too. Actually I think we should
investigate here. I am not a G1GC expert, so I think we can get some help here and really see
what the impact of these short-lived objects is with the new GC and its tuning. It makes sense
to leave this to the JVM if the JVM is not burdened by doing the housekeeping.


> [RpcServer] reuse request read buffer
> -------------------------------------
>
>                 Key: HBASE-14490
>                 URL: https://issues.apache.org/jira/browse/HBASE-14490
>             Project: HBase
>          Issue Type: Improvement
>          Components: IPC/RPC
>    Affects Versions: 2.0.0, 1.0.2
>            Reporter: Zephyr Guo
>            Assignee: Zephyr Guo
>              Labels: performance
>             Fix For: 2.0.0, 1.0.2
>
>         Attachments: 14490.hack.to.1.2.patch, ByteBufferPool.java, HBASE-14490-v1.patch,
HBASE-14490-v10.patch, HBASE-14490-v11.patch, HBASE-14490-v12.patch, HBASE-14490-v2.patch,
HBASE-14490-v3.patch, HBASE-14490-v4.patch, HBASE-14490-v5.patch, HBASE-14490-v6.patch, HBASE-14490-v7.patch,
HBASE-14490-v8.patch, HBASE-14490-v9.patch, gc.png, hits.png, test-v12-patch
>
>
> Reuse the buffer when reading requests. It's not necessary for every request to free its buffer.
The idea of the optimization is to reduce the number of times a ByteBuffer is allocated.
> *Modification*
> 1. {{saslReadAndProcess}}, {{processOneRpc}}, {{processUnwrappedData}}, and {{processConnectionHeader}}
accept a ByteBuffer instead of byte[]. They can advance {{ByteBuffer.position}} correctly once
the data has been read.
> 2. {{processUnwrappedData}} no longer uses any extra memory.
> 3. Maintain a buffer pool in each {{Connection}} (sketched below).
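As an illustration of item 3 (hypothetical names and sizes, reusing the BoundedByteBufferPool sketch from the comment above; the attached patches have the real change):
{code}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.ReadableByteChannel;

// Illustration only: each Connection keeps its own small pool, so request
// buffers are reused across requests arriving on the same connection.
class Connection {
  private final BoundedByteBufferPool reqBufPool =
      new BoundedByteBufferPool(64 * 1024, 2); // made-up size and cap

  void readAndProcess(ReadableByteChannel channel, int dataLength)
      throws IOException {
    ByteBuffer data = reqBufPool.getBuffer();
    // ... limit to dataLength and read as sketched earlier, decode the RPC,
    // then hand the buffer to the Call so done() can return it to the pool ...
  }
}
{code}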



