hbase-issues mailing list archives

From "stack (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-11355) a couple of callQueue related improvements
Date Wed, 25 Jun 2014 23:38:25 GMT

    [ https://issues.apache.org/jira/browse/HBASE-11355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14044174#comment-14044174 ]

stack commented on HBASE-11355:
-------------------------------

[~xieliang007] Pilot error.

I see more than 50% more throughput on pure random reads from cache if I apply the patch and
set the config below:

<property>
  <name>ipc.server.num.callqueue</name>
  <value>10</value>
</property>

My handler count is the default for the master, i.e. 30.

Can we enable this by default?  Add a fat release note, and in hbase-default.xml tie this
new config and the handler count together, at least in the description?
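
To make the tie between the two settings concrete, here is a minimal sketch (not code from the patch) that just reads both values and reports how the handlers would spread over the call queues; the hbase.regionserver.handler.count key and the default of 1 queue are assumptions for illustration:

// Minimal sketch, not HBase's actual scheduler code: read the two settings and
// check how the handlers spread across the configured number of call queues.
import org.apache.hadoop.conf.Configuration;

public class CallQueueRatio {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Key from the comment above; default of 1 queue is assumed for illustration.
    int numQueues = conf.getInt("ipc.server.num.callqueue", 1);
    // Handler count key and default of 30 are assumptions matching the comment.
    int numHandlers = conf.getInt("hbase.regionserver.handler.count", 30);
    // With 10 queues and 30 handlers, each queue is drained by roughly 3 handlers,
    // which is why the description should mention both settings together.
    System.out.printf("queues=%d handlers=%d handlers/queue=%d%n",
        numQueues, numHandlers, Math.max(1, numHandlers / Math.max(1, numQueues)));
  }
}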



> a couple of callQueue related improvements
> ------------------------------------------
>
>                 Key: HBASE-11355
>                 URL: https://issues.apache.org/jira/browse/HBASE-11355
>             Project: HBase
>          Issue Type: Improvement
>          Components: IPC/RPC, Performance
>    Affects Versions: 0.99.0, 0.94.20
>            Reporter: Liang Xie
>            Assignee: Matteo Bertozzi
>         Attachments: HBASE-11355-v0.patch
>
>
> In one of my in-memory read-only tests (100% get requests), one of the top scalability bottlenecks came from the single callQueue. Tentatively sharding this callQueue according to the RPC handler number showed a big throughput improvement (the original get() qps was around 60k; after this change and other hotspot tuning, I got 220k get() qps on the same single region server) in a YCSB read-only scenario.
> Another thing we can do is separate the queue into a read call queue and a write call queue. We have done this in our internal branch; it would be helpful in some outages, to avoid all-read or all-write requests exhausting all handler threads.
> One more thing is changing the current blocking behavior once the callQueue is full. A full callQueue almost always means the backend processing is slow somehow, so failing fast here would be more reasonable if we are using HBase as a low-latency processing system. See "callQueue.put(call)".
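
The first idea in the quoted description, sharding the single callQueue across the handlers, can be illustrated with a minimal self-contained sketch; this is not the attached patch's implementation, and the ShardedCallQueue/dispatch names are made up for illustration:

// Minimal sketch of sharding a single call queue across handler threads.
// Call handling is simplified to Runnable; not the patch's actual code.
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicLong;

public class ShardedCallQueue {
  private final BlockingQueue<Runnable>[] queues;
  private final AtomicLong dispatched = new AtomicLong();

  @SuppressWarnings("unchecked")
  public ShardedCallQueue(int numQueues, int numHandlers, int maxQueueLength) {
    queues = new BlockingQueue[numQueues];
    for (int i = 0; i < numQueues; i++) {
      queues[i] = new LinkedBlockingQueue<>(maxQueueLength);
    }
    // Spread the handlers over the queues round-robin, so contention on any
    // single queue's lock is divided by the number of queues.
    for (int h = 0; h < numHandlers; h++) {
      final BlockingQueue<Runnable> myQueue = queues[h % numQueues];
      Thread handler = new Thread(() -> {
        try {
          while (true) {
            myQueue.take().run();   // each handler drains only its own shard
          }
        } catch (InterruptedException e) {
          Thread.currentThread().interrupt();
        }
      }, "handler-" + h);
      handler.setDaemon(true);
      handler.start();
    }
  }

  /** Round-robin a new call onto one of the shards (blocking if that shard is full). */
  public void dispatch(Runnable call) throws InterruptedException {
    int index = (int) (dispatched.getAndIncrement() % queues.length);
    queues[index].put(call);
  }
}

Each handler drains only its own shard, so the contention measured on the single queue is split across numQueues separate queue locks.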
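The other two ideas, a read/write queue split and failing fast instead of blocking on a full queue, could look roughly like the sketch below; again this is illustrative only, and the CallQueueFullException name here is a stand-in rather than necessarily the class the patch uses:

// Sketch of separate read/write call queues with a fail-fast offer() in place
// of the blocking put() when a queue is full. Illustrative only.
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ReadWriteCallQueues {
  public static class CallQueueFullException extends Exception {
    public CallQueueFullException(String msg) { super(msg); }
  }

  private final BlockingQueue<Runnable> readQueue;
  private final BlockingQueue<Runnable> writeQueue;

  public ReadWriteCallQueues(int maxQueueLength) {
    readQueue = new LinkedBlockingQueue<>(maxQueueLength);
    writeQueue = new LinkedBlockingQueue<>(maxQueueLength);
  }

  /**
   * Fail fast instead of blocking: a full queue usually means the handlers are
   * already saturated, so rejecting keeps latency bounded for the caller.
   */
  public void dispatch(Runnable call, boolean isRead) throws CallQueueFullException {
    BlockingQueue<Runnable> queue = isRead ? readQueue : writeQueue;
    if (!queue.offer(call)) {
      throw new CallQueueFullException(
          (isRead ? "read" : "write") + " call queue is full, rejecting request");
    }
  }
}

Keeping reads and writes on separate queues means a flood of one request type can fill only its own queue, leaving the other type's handlers free.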



--
This message was sent by Atlassian JIRA
(v6.2#6252)
