hbase-issues mailing list archives

From "xinxin fan (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (HBASE-17932) add manageable call queue
Date Tue, 18 Apr 2017 08:01:41 GMT

[ https://issues.apache.org/jira/browse/HBASE-17932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15972288#comment-15972288 ]

xinxin fan edited comment on HBASE-17932 at 4/18/17 8:00 AM:
-------------------------------------------------------------

@Allan Yang, your idea is right. The throughputs of both clients always rise (and drop) at the
same time, which implies that the troughs in Figure 1 really do result from GCs. I also tested
case 1 (big updates only) and case 2 (150 handlers for client-1 and 3 handlers for client-2)
that you mentioned, and troughs appear in both results.
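One cheap way to double-check the GC explanation is to line the throughput troughs up against
the RegionServer GC log; the hbase-env.sh fragment below is only a sketch (JDK 8-style GC
flags, placeholder log path):

    # hbase-env.sh (sketch): enable GC logging on the RegionServer JVM so that
    # GC pause timestamps can be matched against the throughput troughs.
    # The log path is a placeholder; adjust for the actual deployment.
    export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -verbose:gc \
      -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
      -Xloggc:/var/log/hbase/regionserver-gc.log"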

As for HBase 2.0, I will test the cases below (a rough YCSB workload sketch follows the list):
1. run big updates only and observe the active handler number
2. first run small updates, then run big updates, and observe the throughput of the small updates
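For reference, the "big updates" vs. "small updates" workloads could be written as YCSB
CoreWorkload property files along the lines of the sketch below; the record counts and field
sizes are assumptions for illustration, not the exact values used in the earlier runs:

    # workload-big-updates (sketch): update-only workload with large values.
    # Assumed sizing: 10 fields x 100 KB ~= 1 MB per update.
    workload=com.yahoo.ycsb.workloads.CoreWorkload
    recordcount=100000
    operationcount=1000000
    readproportion=0
    updateproportion=1.0
    requestdistribution=uniform
    fieldcount=10
    fieldlength=100000

    # workload-small-updates (sketch): identical except for default-sized fields,
    # i.e. roughly 10 x 100 B = 1 KB per update.
    fieldlength=100

The active handler number can then be watched while the workloads run, e.g. via the
RegionServer IPC metrics (numActiveHandler).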

However, we haven't developed the 'manageable call queue' feature in HBase 2.0 yet, so I
cannot do an A/B test there.

In addition, I will do an A/B test in another scenario: both client-1 and client-2 send get
requests. Client-1 reads data spread over lots of HFiles (requestdistribution=uniform), so
system I/O will be very busy, while client-2 does not need any I/O resources (say, its data
is fully cached). The block cache will be a BucketCache in offheap mode (a configuration
sketch follows the A/B description).
A test:
both clients send requests to a single queue; observe the throughput of both clients
B test:
client-1 sends requests to queue1 and client-2 sends requests to queue2; observe the throughput
of both clients
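For reference, a BucketCache in offheap mode is usually configured along the lines of the
hbase-site.xml sketch below; the cache size and the L1 ratio are placeholders, not values
tuned for this test:

    <!-- hbase-site.xml (sketch): move the L2 block cache off-heap via BucketCache.
         The sizes below are placeholders, not the values used in these tests. -->
    <property>
      <name>hbase.bucketcache.ioengine</name>
      <value>offheap</value>
    </property>
    <property>
      <!-- off-heap cache size in MB -->
      <name>hbase.bucketcache.size</name>
      <value>4096</value>
    </property>
    <property>
      <!-- fraction of heap for the on-heap L1 LRU cache; data blocks mostly live
           in the BucketCache -->
      <name>hfile.block.cache.size</name>
      <value>0.2</value>
    </property>

HBASE_OFFHEAPSIZE (i.e. -XX:MaxDirectMemorySize) also has to be set in hbase-env.sh large
enough to back the off-heap cache.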





> add manageable call queue
> -------------------------
>
>                 Key: HBASE-17932
>                 URL: https://issues.apache.org/jira/browse/HBASE-17932
>             Project: HBase
>          Issue Type: Improvement
>          Components: IPC/RPC
>            Reporter: xinxin fan
>             Fix For: 1.1.2
>
>         Attachments: ManageableCallQueue.pdf
>
>
> The Manageable Call Queue aims to provide a unified way to build:
> 1. An administrator can create a queue with a specified number of handlers using a shell command.
> 2. An administrator can increase/decrease the handler number for a specified queue using a shell command.
> 3. An administrator can assign the same type of request load to one queue; for example, all
> large-object write requests can be assigned to one queue, while all small-object write requests
> are assigned to another queue.
> This feature allows you to change the IPC queues/handlers at runtime and is an improvement for
> running multiple workloads on a single HBase cluster in some cases.
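To make the proposal concrete, the shell interaction described above might look something like
the sketch below; the command names and arguments are entirely hypothetical (invented here for
illustration, not taken from the attached ManageableCallQueue.pdf or from any HBase release):

    # Hypothetical HBase shell commands -- names and arguments are invented for
    # illustration only; the real syntax would come from this feature's design.
    create_call_queue 'big_write_queue', HANDLER_COUNT => 10
    # resize the queue's handler pool at runtime
    alter_call_queue 'big_write_queue', HANDLER_COUNT => 20
    # route all large-object write requests to that queue
    assign_call_queue 'big_write_queue', REQUEST_TYPE => 'large_object_write'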



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
