hbase-issues mailing list archives

From "Andrew Purtell (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-12790) Support fairness across parallelized scans
Date Mon, 05 Oct 2015 23:57:27 GMT

    [ https://issues.apache.org/jira/browse/HBASE-12790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14944277#comment-14944277 ]

Andrew Purtell commented on HBASE-12790:
----------------------------------------

bq.  Instead of waiting for some 20 secs for one point query now we will be able to execute
around 10 queries each with 2 secs time. Let me see how I can present the reports.

Thanks Ram.

Did you mean: instead of waiting 20 seconds for one count query, we will now see several
point queries completing during that interval?

Beyond counting how many queries of each type complete during the test interval, for the
mixed point and count load you're putting on the system I also wonder how the distribution
of completion times for point queries changes. We should see a clear improvement while the
count query is running with the patch applied. (smile) We shouldn't see a performance impact
when it isn't, or if we do, we can measure its magnitude and decide whether it's acceptable.
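
To make that comparison concrete, the test harness could record every point-query latency
during the mixed run and report percentiles for the runs with and without the patch. A rough
sketch (LatencyReport and its methods are hypothetical names, not part of the patch or of any
existing benchmark):

{code:java}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

/**
 * Hypothetical helper: collects point-query latencies observed during the
 * mixed point/count workload and reports percentiles, so runs with and
 * without the patch can be compared.
 */
public class LatencyReport {
  private final List<Long> latenciesMs = new ArrayList<>();

  public synchronized void record(long latencyMs) {
    latenciesMs.add(latencyMs);
  }

  public synchronized String summarize() {
    List<Long> sorted = new ArrayList<>(latenciesMs);
    Collections.sort(sorted);
    return String.format("n=%d p50=%dms p95=%dms p99=%dms",
        sorted.size(),
        percentile(sorted, 0.50), percentile(sorted, 0.95), percentile(sorted, 0.99));
  }

  // Nearest-rank percentile over the sorted sample.
  private static long percentile(List<Long> sorted, double p) {
    if (sorted.isEmpty()) {
      return 0L;
    }
    int idx = (int) Math.ceil(p * sorted.size()) - 1;
    return sorted.get(Math.min(Math.max(idx, 0), sorted.size() - 1));
  }
}
{code}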

> Support fairness across parallelized scans
> ------------------------------------------
>
>                 Key: HBASE-12790
>                 URL: https://issues.apache.org/jira/browse/HBASE-12790
>             Project: HBase
>          Issue Type: New Feature
>            Reporter: James Taylor
>            Assignee: ramkrishna.s.vasudevan
>              Labels: Phoenix
>         Attachments: AbstractRoundRobinQueue.java, HBASE-12790.patch, HBASE-12790_1.patch,
HBASE-12790_5.patch, HBASE-12790_callwrapper.patch, HBASE-12790_trunk_1.patch
>
>
> Some HBase clients parallelize the execution of a scan to reduce the latency of getting back
results. On a loaded cluster with interleaved scans this can lead to starvation, since the
RPC queue is ordered and processed on a FIFO basis. For example, suppose two clients, A and
B, submit largish scans at the same time, and each client breaks its scan into 100 smaller
scans (equal-depth chunks along the row key). If the 100 scans of client A are queued first,
followed immediately by the 100 scans of client B, then client B will be starved of any
results until the scans for client A complete.
> One solution is to use the attached AbstractRoundRobinQueue instead of the standard FIFO
queue. The queue implementation could be made configurable (maybe it already is) via a new
config parameter. Using this queue would require the client to attach the same identifier to
all 100 parallel scans that represent a single logical scan from the client's point of view.
With this information, the round-robin queue would pick tasks off the queue in round-robin
fashion (instead of strictly FIFO) to prevent starvation across interleaved parallelized
scans.
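
A minimal sketch of the round-robin idea described above (illustrative only; it is not the
attached AbstractRoundRobinQueue.java and it ignores the blocking and locking semantics a
real RPC call queue needs):

{code:java}
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

/**
 * Illustrative round-robin queue: calls sharing a groupId (e.g. the 100
 * chunk scans of one logical client scan) go into the same sub-queue, and
 * poll() rotates across groups so no single client can monopolize the
 * handlers. Not thread-safe.
 */
public class RoundRobinCallQueue<E> {
  private final Map<String, Queue<E>> groups = new HashMap<>();
  private final ArrayDeque<String> rotation = new ArrayDeque<>();

  public void offer(String groupId, E call) {
    Queue<E> q = groups.get(groupId);
    if (q == null) {
      q = new ArrayDeque<>();
      groups.put(groupId, q);
      rotation.addLast(groupId);   // new group joins the end of the rotation
    }
    q.add(call);
  }

  /** Next call in round-robin order across groups, or null if empty. */
  public E poll() {
    String groupId = rotation.pollFirst();
    if (groupId == null) {
      return null;
    }
    Queue<E> q = groups.get(groupId);
    E call = q.poll();
    if (q.isEmpty()) {
      groups.remove(groupId);      // group exhausted
    } else {
      rotation.addLast(groupId);   // otherwise move it to the back of the line
    }
    return call;
  }
}
{code}

With client A's 100 chunk scans offered under group "A" and client B's under group "B",
poll() alternates between the two groups, so B starts getting results back immediately
instead of waiting for all of A's chunks to finish.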



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
