hbase-issues mailing list archives

From "stack (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-12790) Support fairness across parallelized scans
Date Sat, 07 Nov 2015 07:10:10 GMT

    [ https://issues.apache.org/jira/browse/HBASE-12790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14995080#comment-14995080 ]

stack commented on HBASE-12790:
-------------------------------

bq. Therefore dispatching work queued per connection in a round robin manner would satisfy the problem as stated there.

Thank you for the summary and intercession [~andrew.purtell@gmail.com]

bq. That won't work because an HConnection is shared by all the clients on the same JVM.

Not so [~giacomotaylor], not since HBase 1.0. But I can see where you are coming from; prior to 1.0, connection handling was voodoo. Connection handling is for the client to manage now.

If a client wants to run with a particular configuration (priority, etc.), I suggest it open a new Connection, set attributes appropriately, and away you go. It will be easier for the server to sort incoming load/scheduling on a per-Connection basis than on a per-request-and-then-per-group basis. Would this work for Phoenix, mighty James?
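
For illustration, a minimal sketch of the per-configuration Connection approach suggested above, using the HBase 1.0+ client API. The table name and the "client.id" attribute key are invented for this example, and how the server would act on such an attribute is exactly what this issue is debating.

{code:java}
// Hypothetical sketch only: each logical client builds its own Configuration
// and Connection instead of sharing one implicit HConnection (the pre-1.0
// behaviour). "my_table" and the "client.id" attribute key are made up here.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class PerClientConnectionExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // A dedicated Connection for this client's configuration/priority.
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Table table = connection.getTable(TableName.valueOf("my_table"))) {
      Scan scan = new Scan();
      // Tag the scan so the server could group or schedule work per client.
      scan.setAttribute("client.id", Bytes.toBytes("client-A"));
      try (ResultScanner scanner = table.getScanner(scan)) {
        for (Result result : scanner) {
          System.out.println(result);
        }
      }
    }
  }
}
{code}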

> Support fairness across parallelized scans
> ------------------------------------------
>
>                 Key: HBASE-12790
>                 URL: https://issues.apache.org/jira/browse/HBASE-12790
>             Project: HBase
>          Issue Type: New Feature
>            Reporter: James Taylor
>            Assignee: ramkrishna.s.vasudevan
>              Labels: Phoenix
>         Attachments: AbstractRoundRobinQueue.java, HBASE-12790.patch, HBASE-12790_1.patch,
HBASE-12790_5.patch, HBASE-12790_callwrapper.patch, HBASE-12790_trunk_1.patch, PHOENIX_4.5.3-HBase-0.98-2317-SNAPSHOT.zip
>
>
> Some HBase clients parallelize the execution of a scan to reduce the latency of getting back results. This can lead to starvation on a loaded cluster with interleaved scans, since the RPC queue is ordered and processed on a FIFO basis. For example, suppose two clients, A and B, submit largish scans at the same time, and each scan is broken down by the client into 100 scans (equal-depth chunks along the row key). If the 100 scans of client A are queued first, followed immediately by the 100 scans of client B, client B will be starved of any results until the scans for client A complete.
> One solution is to use the attached AbstractRoundRobinQueue instead of the standard FIFO queue. The queue to use could be (maybe it already is) configurable via a new config parameter. Using this queue would require the client to attach the same identifier to all 100 parallel scans that represent a single logical scan from the client's point of view. With that information, the round robin queue would pick off tasks in a round robin fashion (instead of a strictly FIFO manner) to prevent starvation across interleaved parallelized scans.
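
For illustration, a minimal sketch of the round-robin dispatch idea described in the quoted report. It is not the attached AbstractRoundRobinQueue.java; the client-identifier key and the simple synchronized, unbounded structure are assumptions made for the sketch.

{code:java}
// Sketch of round-robin dequeue across per-client buckets instead of a
// single FIFO queue. Not the attached AbstractRoundRobinQueue.java.
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

public class RoundRobinCallQueue<T> {
  // One FIFO bucket per logical client (e.g. per Connection or scan id).
  private final Map<String, Deque<T>> buckets = new HashMap<>();
  // Rotation order of clients that currently have queued work.
  private final Deque<String> rotation = new ArrayDeque<>();

  public synchronized void offer(String clientId, T call) {
    Deque<T> bucket = buckets.get(clientId);
    if (bucket == null) {
      bucket = new ArrayDeque<>();
      buckets.put(clientId, bucket);
      rotation.addLast(clientId);   // new client joins the end of the rotation
    }
    bucket.addLast(call);
  }

  /** Returns the next call, rotating across clients, or null if empty. */
  public synchronized T poll() {
    String clientId = rotation.pollFirst();
    if (clientId == null) {
      return null;                  // nothing queued at all
    }
    Deque<T> bucket = buckets.get(clientId);
    T call = bucket.pollFirst();    // bucket is never empty while in rotation
    if (bucket.isEmpty()) {
      buckets.remove(clientId);     // client drained; drop it from the rotation
    } else {
      rotation.addLast(clientId);   // more work left; move client to the back
    }
    return call;
  }
}
{code}

With calls bucketed per client, a burst of 100 scan chunks from client A no longer blocks client B: each poll advances the rotation, so A and B drain at roughly the same rate.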



