hbase-issues mailing list archives

From "stack (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-14479) Apply the Leader/Followers pattern to RpcServer's Reader
Date Mon, 12 Oct 2015 20:50:06 GMT

    [ https://issues.apache.org/jira/browse/HBASE-14479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14953700#comment-14953700

stack commented on HBASE-14479:

A few comments:

+ There is only one Reader thread so how can there be leaders and followers?
+ If only one Reader thread, could we discard and let the Listener thread do the dispatch?
+ Patch could do with a few comments, including a link to the pattern being implemented. For example,
what is going on here:

+          SelectionKey key = selectedKeyQueue.poll();
+          if (key != null) {
+            processing(key);
+            continue;
+          }

We are the leader and we keep processing the queue until there are no more keys... then we fall through
and do similar, relinquishing the lock if there is no more to do?
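For context, the Leader/Followers pattern has one "leader" thread at a time take the next event, promote a follower to be the new leader, and then process the event concurrently with it. The sketch below is a minimal, hypothetical illustration of that hand-off (all names are invented; this is not the patch's code):

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicInteger;

/**
 * Hypothetical sketch of the Leader/Followers idea: one thread at a time
 * (the leader) dequeues an event, relinquishes leadership, and handles the
 * event outside the lock while the promoted follower dequeues the next one.
 */
public class LeaderFollowersSketch {
    private final Object lock = new Object();
    private boolean leaderExists = false;
    private final ConcurrentLinkedQueue<Integer> events = new ConcurrentLinkedQueue<>();
    private final AtomicInteger processed = new AtomicInteger();

    public LeaderFollowersSketch(int numEvents) {
        for (int i = 0; i < numEvents; i++) events.add(i);
    }

    private void workerLoop() {
        try {
            while (true) {
                Integer event;
                synchronized (lock) {
                    while (leaderExists) lock.wait(); // queue up as a follower
                    leaderExists = true;              // we are now the leader
                    event = events.poll();            // leader takes the next event
                    leaderExists = false;             // relinquish leadership...
                    lock.notify();                    // ...and promote a follower
                }
                if (event == null) return;            // nothing left: exit
                processed.incrementAndGet();          // "handle" event outside the lock
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public int run(int numThreads) {
        Thread[] ts = new Thread[numThreads];
        for (int i = 0; i < numThreads; i++) {
            ts[i] = new Thread(this::workerLoop);
            ts[i].start();
        }
        try {
            for (Thread t : ts) t.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return processed.get();
    }

    public static void main(String[] args) {
        // Every event is dequeued under the lock, so each is handled exactly once.
        System.out.println(new LeaderFollowersSketch(100).run(4)); // prints 100
    }
}
```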

Patch does some nice cleanup. Just trying to understand it better. Thanks [~ikeda]

> Apply the Leader/Followers pattern to RpcServer's Reader
> --------------------------------------------------------
>                 Key: HBASE-14479
>                 URL: https://issues.apache.org/jira/browse/HBASE-14479
>             Project: HBase
>          Issue Type: Improvement
>          Components: IPC/RPC, Performance
>            Reporter: Hiroshi Ikeda
>            Assignee: Hiroshi Ikeda
>            Priority: Minor
>         Attachments: HBASE-14479-V2 (1).patch, HBASE-14479-V2.patch, HBASE-14479-V2.patch, HBASE-14479.patch, gc.png, gets.png, io.png, median.png
> {{RpcServer}} uses multiple selectors to read data for load distribution, but the distribution is done simply by round-robin. It is uncertain, especially over a long run, whether the load is evenly divided and resources are used without waste.
> Moreover, multiple selectors may cause excessive context switches, which prioritize low latency (while we merely add the requests to queues) and can reduce the throughput of the whole server.
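The round-robin distribution the description criticizes amounts to handing each new connection to the next reader in a fixed rotation, regardless of how busy that reader currently is. A hypothetical sketch (invented names; not the actual RpcServer Listener code):

```java
/**
 * Hypothetical sketch of round-robin connection distribution across reader
 * threads. The rotation ignores per-reader load, which is why, over a long
 * run, some readers can end up with far more work than others.
 */
public class RoundRobinDispatch {
    private final int numReaders;
    private int currentReader = 0;

    public RoundRobinDispatch(int numReaders) {
        this.numReaders = numReaders;
    }

    /** The listener assigns each new connection to the next reader in turn. */
    public synchronized int nextReader() {
        int r = currentReader;
        currentReader = (currentReader + 1) % numReaders;
        return r;
    }

    public static void main(String[] args) {
        RoundRobinDispatch d = new RoundRobinDispatch(3);
        // Connections are spread 0, 1, 2, 0, 1, ... no matter how long each
        // connection keeps its reader busy.
        for (int i = 0; i < 5; i++) {
            System.out.print(d.nextReader() + " "); // prints "0 1 2 0 1"
        }
        System.out.println();
    }
}
```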

This message was sent by Atlassian JIRA
