hbase-issues mailing list archives

From "Andrew Purtell (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-12790) Support fairness across parallelized scans
Date Tue, 10 Nov 2015 19:01:11 GMT

    [ https://issues.apache.org/jira/browse/HBASE-12790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14999119#comment-14999119 ]

Andrew Purtell commented on HBASE-12790:
----------------------------------------

bq. Hopefully there's enough information to feed into requirements from a user perspective:
round robin across parallelized operations and MR jobs running on the same cluster to prevent
one job from locking out others.

There may be an implicit "in 0.98" here too. Let's remove that, if so, because:

bq. (In 1.1 and up) Scanners can return after a certain size and/or time threshold has been
crossed

Step 1: Have both the Phoenix scanners and those MR jobs set these parameters to constrain
how long each scanner.next call can run. Let's double-check that we can set the defaults
we want in site configuration. 
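Concretely, step 1 might look like the following site configuration. This is a sketch: the property names below are from the 1.1-era scanner size/time limit work and the values are illustrative, so verify both against the version actually deployed (which is exactly the double-checking this step calls for).

```xml
<!-- hbase-site.xml (illustrative values) -->
<!-- Cap the data a single scanner.next call can return, in bytes. -->
<property>
  <name>hbase.client.scanner.max.result.size</name>
  <value>2097152</value>
</property>
<!-- Client-side scanner timeout, in ms; in 1.1+ the server can return early
     or heartbeat so a next call stays under this bound. -->
<property>
  <name>hbase.client.scanner.timeout.period</name>
  <value>60000</value>
</property>
```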

Step 2: On the server, as a generic and transparent improvement, have the scheduler round-robin
requests between connections.

With both of these in place, we get:
- No one client can starve other clients. That means Phoenix work is interleaved with MR work
on the server side. This is what you want.
- Within your own single connection, no unit of work will exceed time X. This doesn't give
you everything you want "within the connection" but you can work with this, because the server
will give you ~deterministic performance per op.

Now you can take the queue of local work - you own this, it is client side; the HBase server
side doesn't (and can't) know about internal client priorities - and, if you have internal
notions of "this is for query A" and "that is for query B", make sure you interleave the calls
to scanner.next for A and B. It's more work than naively blasting ops at the servers and expecting
the server side to handle differentiated QoS "within the connection", but that is the step
too far the community doesn't want (yet). Leave it out and we might arrive at agreement.
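The client-side interleaving can be sketched in plain Java. Here strings stand in for the Result batches a bounded scanner.next call would return, and drainRoundRobin is a hypothetical helper, not an HBase or Phoenix API:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

class InterleavedScans {
    // Each deque holds the pending scanner.next() batches for one logical query;
    // the strings stand in for the Result[] a bounded next() call returns.
    static List<String> drainRoundRobin(List<Deque<String>> perQueryBatches) {
        List<String> delivered = new ArrayList<>();
        boolean progress = true;
        while (progress) {
            progress = false;
            // Take at most one batch from each query per pass, so query A's
            // many chunks cannot lock out query B on the shared connection.
            for (Deque<String> q : perQueryBatches) {
                String batch = q.poll();
                if (batch != null) {
                    delivered.add(batch);
                    progress = true;
                }
            }
        }
        return delivered;
    }
}
```

Because each next call is time/size bounded by step 1, one pass over the queries has roughly deterministic cost, which is what makes this simple rotation fair.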

> Support fairness across parallelized scans
> ------------------------------------------
>
>                 Key: HBASE-12790
>                 URL: https://issues.apache.org/jira/browse/HBASE-12790
>             Project: HBase
>          Issue Type: New Feature
>            Reporter: James Taylor
>            Assignee: ramkrishna.s.vasudevan
>              Labels: Phoenix
>         Attachments: AbstractRoundRobinQueue.java, HBASE-12790.patch, HBASE-12790_1.patch,
HBASE-12790_5.patch, HBASE-12790_callwrapper.patch, HBASE-12790_trunk_1.patch, PHOENIX_4.5.3-HBase-0.98-2317-SNAPSHOT.zip
>
>
> Some HBase clients parallelize the execution of a scan to reduce the latency of getting back
results. This can lead to starvation on a loaded cluster with interleaved scans, since the
RPC queue is ordered and processed on a FIFO basis. For example, suppose two clients, A and
B, submit largish scans at the same time, and each scan is broken down by the client into 100
smaller scans (equal-depth chunks along the row key). If the 100 scans of client A are queued
first, followed immediately by the 100 scans of client B, then client B is starved of any
results until the scans for client A complete.
> One solution to this is to use the attached AbstractRoundRobinQueue instead of the standard
FIFO queue. The queue to be used could be made (maybe it already is) configurable via a new
config parameter. Using this queue would require the client to supply the same identifier for
all 100 parallel scans that represent a single logical scan from the client's point
of view. With this information, the round-robin queue would pick tasks off the queue
in round-robin fashion (instead of strictly FIFO) to prevent starvation across interleaved
parallelized scans.
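A queue of that shape could look roughly like the following. This is a toy sketch, not the attached AbstractRoundRobinQueue (names and structure are hypothetical): elements are grouped by the caller-supplied identifier shared by all parallel scans of one logical scan, and poll() rotates across the groups instead of draining in FIFO order.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.LinkedHashMap;
import java.util.Map;

class RoundRobinQueue<E> {
    // One FIFO sub-queue per producer (logical scan), in arrival order.
    private final Map<String, Deque<E>> groups = new LinkedHashMap<>();
    private final ArrayList<String> order = new ArrayList<>();
    private int cursor = 0;

    void offer(String producerId, E element) {
        groups.computeIfAbsent(producerId, id -> {
            order.add(id);
            return new ArrayDeque<>();
        }).add(element);
    }

    /** Returns the next element, rotating across producers; null if empty. */
    E poll() {
        for (int i = 0; i < order.size(); i++) {
            String id = order.get(cursor);
            cursor = (cursor + 1) % order.size();
            E e = groups.get(id).poll();
            if (e != null) {
                return e;
            }
        }
        return null;
    }
}
```

Even if producer A enqueues all 100 of its scans before producer B enqueues any, consumers draining via poll() alternate A, B, A, B rather than serving A to completion first.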



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
