hbase-issues mailing list archives

From "stack (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-14479) Apply the Leader/Followers pattern to RpcServer's Reader
Date Thu, 16 Jun 2016 05:24:05 GMT

    [ https://issues.apache.org/jira/browse/HBASE-14479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15333124#comment-15333124 ]

stack commented on HBASE-14479:

Just putting a placeholder here:

Our rpcscheduler is configurable. Default is FIFO. If we run the request on the Reader thread
-- not handing off to the Handler -- then we go much faster. Over in https://issues.apache.org/jira/browse/HBASE-15967?focusedCommentId=15317950&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15317950,
[~ikeda] suggests running all requests, irrespective of priority, on the Reader until we get close
to the limit. Then we switch to queuing and respecting priority.
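A minimal sketch of that hybrid dispatch idea (hypothetical class and field names, not actual HBase code): the Reader runs calls inline while in-flight work is below the limit, and only starts queuing for the Handlers once it gets close.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: run on the Reader thread until near the limit,
// then fall back to queuing (where priority would be respected).
class HybridDispatch {
  private final AtomicInteger inFlight = new AtomicInteger();
  private final int limit;
  final BlockingQueue<Runnable> callQueue = new LinkedBlockingQueue<>();

  HybridDispatch(int limit) { this.limit = limit; }

  void dispatch(Runnable call) {
    if (inFlight.incrementAndGet() < limit) {
      try {
        call.run();          // run on the Reader: no handoff, no context switch
      } finally {
        inFlight.decrementAndGet();
      }
    } else {
      inFlight.decrementAndGet();
      callQueue.offer(call); // near the limit: hand off to Handlers via the queue
    }
  }
}
```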

Meantime, there is the FB experience which the lads have codified in AdaptiveLifoCoDelCallQueue
where we FIFO until we become loaded and then we go LIFO with a controlled delay that has
us shedding load rather than becoming swamped.
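A simplified sketch of that idea (this is not the actual HBase AdaptiveLifoCoDelCallQueue; threshold and target names are made up): FIFO while the backlog is small; once it passes a threshold, serve the newest call and shed calls that have already waited past the CoDel target.

```java
import java.util.concurrent.ConcurrentLinkedDeque;

// Sketch of adaptive LIFO + CoDel: FIFO under light load, LIFO with
// load shedding once the queue is longer than lifoThreshold.
class AdaptiveLifoCoDelSketch<T> {
  static final class Entry<T> {
    final T call;
    final long enqueuedAtMs;
    Entry(T call, long t) { this.call = call; this.enqueuedAtMs = t; }
  }

  private final ConcurrentLinkedDeque<Entry<T>> deque = new ConcurrentLinkedDeque<>();
  private final int lifoThreshold;
  private final long codelTargetMs;

  AdaptiveLifoCoDelSketch(int lifoThreshold, long codelTargetMs) {
    this.lifoThreshold = lifoThreshold;
    this.codelTargetMs = codelTargetMs;
  }

  void offer(T call) {
    deque.addLast(new Entry<>(call, System.currentTimeMillis()));
  }

  // Next call to run, or null if empty.
  T poll() {
    if (deque.size() <= lifoThreshold) {
      Entry<T> e = deque.pollFirst();          // light load: plain FIFO
      return e == null ? null : e.call;
    }
    Entry<T> newest = deque.pollLast();        // loaded: newest can still meet its deadline
    long now = System.currentTimeMillis();
    Entry<T> head;
    while ((head = deque.peekFirst()) != null && now - head.enqueuedAtMs > codelTargetMs) {
      deque.pollFirst();                       // shed stale calls instead of serving them late
    }
    return newest == null ? null : newest.call;
  }
}
```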

Default should be a conflation of the two notions above. TODO. The FB lads are going to come
back w/ some more input running AdaptiveLifoCoDelCallQueue. That'll help.

> Apply the Leader/Followers pattern to RpcServer's Reader
> --------------------------------------------------------
>                 Key: HBASE-14479
>                 URL: https://issues.apache.org/jira/browse/HBASE-14479
>             Project: HBase
>          Issue Type: Improvement
>          Components: IPC/RPC, Performance
>            Reporter: Hiroshi Ikeda
>            Assignee: Hiroshi Ikeda
>            Priority: Minor
>         Attachments: HBASE-14479-V2 (1).patch, HBASE-14479-V2.patch, HBASE-14479-V2.patch,
HBASE-14479.patch, flamegraph-19152.svg, flamegraph-32667.svg, gc.png, gets.png, io.png, median.png
> {{RpcServer}} uses multiple selectors to read data for load distribution, but the distribution
is done only by round-robin. It is uncertain, especially over a long run, whether load is divided
equally and resources are used without being wasted.
> Moreover, multiple selectors may cause excessive context switches that favor low latency
(while we merely add the requests to queues), and that can reduce the throughput of the whole
server.
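The round-robin assignment the description critiques can be sketched as follows (hypothetical names, not the actual RpcServer code): each accepted connection is pinned to the next Reader in turn, regardless of how busy that Reader already is.

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of round-robin connection-to-Reader assignment: connection i
// always lands on reader i % n, whatever that reader's current load.
class ReaderPool {
  private final String[] readers;       // stand-ins for Reader threads
  private final AtomicLong next = new AtomicLong();

  ReaderPool(int n) {
    readers = new String[n];
    for (int i = 0; i < n; i++) readers[i] = "reader-" + i;
  }

  String assign() {
    return readers[(int) (next.getAndIncrement() % readers.length)];
  }
}
```

Under the Leader/Followers pattern the issue proposes, an idle thread would instead take the next ready event itself, so load balances by availability rather than by a fixed rotation.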

This message was sent by Atlassian JIRA
