hbase-issues mailing list archives

From "stack (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-14479) Apply the Leader/Followers pattern to RpcServer's Reader
Date Fri, 08 Jul 2016 17:48:11 GMT

    [ https://issues.apache.org/jira/browse/HBASE-14479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15368055#comment-15368055 ]

stack commented on HBASE-14479:
-------------------------------

bq. I found that the method Reader.doRead(SelectionKey) just does one request for each call,
regardless of whether the next request is available...

How do you mean, [~ikeda]? The doRunLoop will doRead for each key returned by a select.
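
For reference, a minimal sketch of the kind of Reader select loop under discussion (the structure and names are illustrative, not the actual RpcServer code):

{code:java}
import java.io.IOException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.util.Iterator;

// Illustrative only: a simplified selector loop in the spirit of
// Reader.doRunLoop()/doRead(), not the real RpcServer implementation.
class ReaderLoopSketch implements Runnable {
  private final Selector selector;

  ReaderLoopSketch(Selector selector) {
    this.selector = selector;
  }

  @Override
  public void run() {
    try {
      while (!Thread.currentThread().isInterrupted()) {
        selector.select();                                 // block until channels are ready
        Iterator<SelectionKey> it = selector.selectedKeys().iterator();
        while (it.hasNext()) {
          SelectionKey key = it.next();
          it.remove();
          if (key.isValid() && key.isReadable()) {
            doRead(key);                                   // one read pass per ready key per select
          }
        }
      }
    } catch (IOException e) {
      // a real server would log this and close or restart the reader
    }
  }

  private void doRead(SelectionKey key) throws IOException {
    // Read what is currently buffered on this connection and enqueue the decoded
    // request; the question above is whether this should loop until no further
    // complete request is available on the channel.
  }
}
{code}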

bq. BTW, in order to resolve this, if we read as many requests from a connection as possible,
the queue will easily become full and it will be difficult to handle requests fairly across
connections. I think it is better to cap the number of requests executing simultaneously
for each connection, according to the number of requests currently queued (instead of using
a fixed bounded queue).

Sounds good. I can test any experiments you might want to try.
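
Purely as an illustration of the per-connection cap idea (the semaphore-based approach and all names here are assumptions, not part of any attached patch):

{code:java}
import java.util.concurrent.Semaphore;

// Hypothetical sketch: cap the number of in-flight requests per connection so a
// single busy connection cannot monopolize the call queue. Not from HBASE-14479.
class PerConnectionLimiter {
  private final Semaphore permits;

  PerConnectionLimiter(int maxInFlight) {
    this.permits = new Semaphore(maxInFlight);
  }

  /** Called by the reader before enqueuing another request from this connection. */
  boolean tryAdmit() {
    return permits.tryAcquire();
  }

  /** Called by a handler when the request completes (or fails). */
  void release() {
    permits.release();
  }
}
{code}

A dynamic cap derived from the current queue depth, as suggested above, would replace the fixed maxInFlight with a value recomputed on each admit.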

Thanks.

> Apply the Leader/Followers pattern to RpcServer's Reader
> --------------------------------------------------------
>
>                 Key: HBASE-14479
>                 URL: https://issues.apache.org/jira/browse/HBASE-14479
>             Project: HBase
>          Issue Type: Improvement
>          Components: IPC/RPC, Performance
>            Reporter: Hiroshi Ikeda
>            Assignee: Hiroshi Ikeda
>            Priority: Minor
>         Attachments: HBASE-14479-V2 (1).patch, HBASE-14479-V2.patch, HBASE-14479-V2.patch,
HBASE-14479.patch, flamegraph-19152.svg, flamegraph-32667.svg, gc.png, gets.png, io.png, median.png
>
>
> {{RpcServer}} uses multiple selectors to read data for load distribution, but the distribution is done only by round-robin. It is uncertain, especially over a long run, whether the load is divided equally and resources are used without waste.
> Moreover, multiple selectors may cause excessive context switches that favor low latency (even though we merely add the requests to queues), which can reduce the throughput of the whole server.
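
For reference, a minimal sketch of the Leader/Followers shape the issue title proposes: a pool of reader threads takes turns being the single leader that waits on the selector, hands leadership off once keys are ready, and then processes those keys itself (illustrative only; none of these names come from the attached patches):

{code:java}
import java.io.IOException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.locks.ReentrantLock;

// Illustrative Leader/Followers sketch, not code from HBASE-14479: only the
// current leader blocks in select(); after collecting ready keys it promotes
// the next leader and processes its own keys as a follower.
class LeaderFollowersReader implements Runnable {
  private final Selector selector;
  private final ReentrantLock leaderLock;

  LeaderFollowersReader(Selector selector, ReentrantLock leaderLock) {
    this.selector = selector;
    this.leaderLock = leaderLock;        // shared by all threads in the pool
  }

  @Override
  public void run() {
    try {
      while (!Thread.currentThread().isInterrupted()) {
        Set<SelectionKey> myKeys;
        leaderLock.lock();               // become the leader
        try {
          selector.select();
          myKeys = new HashSet<>(selector.selectedKeys());
          selector.selectedKeys().clear();
        } finally {
          leaderLock.unlock();           // promote the next leader
        }
        for (SelectionKey key : myKeys) {  // process as a follower
          if (key.isValid() && key.isReadable()) {
            read(key);
          }
        }
      }
    } catch (IOException e) {
      // a real implementation would log this and shut the reader down
    }
  }

  private void read(SelectionKey key) throws IOException {
    // decode and dispatch request(s) from this connection
  }
}
{code}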



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
