Date: Tue, 18 Apr 2017 08:00:51 +0000 (UTC)
From: "xinxin fan (JIRA)"
To: issues@hbase.apache.org
Subject: [jira] [Comment Edited] (HBASE-17932) add manageable call queue

    [ https://issues.apache.org/jira/browse/HBASE-17932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15972288#comment-15972288 ]

xinxin fan edited comment on HBASE-17932 at 4/18/17 8:00 AM:
-------------------------------------------------------------

@Allan Yang, your idea is right. The throughputs of both clients always rise (and drop) at the same time, which implies that the troughs in Figure 1 really are the result of GCs. I also tested case 1 (big updates only) and case 2 (150 handlers for client-1 and 3 handlers for client-2) that you mentioned, and troughs appear in both results.

As for HBase 2.0, I will test the cases below:
1. Run big updates only and observe the active handler number.
2. First run small updates, then run big updates, and observe the throughput of the small updates.

We haven't developed the 'manageable call queue' feature yet, so I cannot do A/B tests for it.

In addition, I will do an A/B test in another scenario: both client-1 and client-2 send Get requests. Client-1 reads all its data from lots of HFiles (requestdistribution=uniform), so the system I/O will be very busy.
Client-2 does not require any I/O resources (say, its data is cached). The block cache will be BucketCache in offheap mode.

A test: both clients send requests to a single queue; observe the throughput of both clients.
B test: client-1 sends requests to queue1 and client-2 sends requests to queue2; observe the throughput of both clients.

> add manageable call queue
> -------------------------
>
>                 Key: HBASE-17932
>                 URL: https://issues.apache.org/jira/browse/HBASE-17932
>             Project: HBase
>          Issue Type: Improvement
>          Components: IPC/RPC
>            Reporter: xinxin fan
>             Fix For: 1.1.2
>
>         Attachments: ManageableCallQueue.pdf
>
>
> The Manageable Call Queue aims to provide a unified way so that:
> 1. An administrator can create a queue with a specified number of handlers using a shell command.
> 2. An administrator can increase/decrease the number of handlers for a specified queue using a shell command.
> 3. An administrator can assign the same type of request load to one queue. For example, all large-object write requests can be assigned to one queue, while all small-object write requests are assigned to another queue.
> This feature allows you to change the IPC queues/handlers at runtime and is an improvement for running multiple workloads on a single HBase cluster in some cases.

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
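The queue-isolation idea behind the A and B tests above can be illustrated with a toy simulation (plain Python, not HBase code; the request names and per-request costs below are made-up illustration, not measured values). With a single shared FIFO, cheap requests queued behind expensive ones inherit their latency; with one queue and handler per workload, the cheap workload's latency is unaffected by the expensive one:

```python
from collections import deque

def drain_shared(requests):
    # One FIFO drained by a single handler: costs accumulate on one clock,
    # so cheap requests queued behind expensive ones finish late.
    q = deque(requests)
    clock, finish = 0, {}
    while q:
        name, cost = q.popleft()
        clock += cost
        finish[name] = clock
    return finish

def drain_isolated(requests):
    # One FIFO per request type ("big" / "small"), each drained by its own
    # handler and therefore running on its own clock.
    queues = {}
    for name, cost in requests:
        queues.setdefault(name.split("-")[0], deque()).append((name, cost))
    finish = {}
    for q in queues.values():
        clock = 0
        while q:
            name, cost = q.popleft()
            clock += cost
            finish[name] = clock
    return finish

# client-1 sends expensive ("big") requests, client-2 cheap ("small") ones
reqs = [("big-1", 10), ("big-2", 10), ("small-1", 1), ("small-2", 1)]
shared = drain_shared(reqs)      # small-1 finishes at t=21, small-2 at t=22
isolated = drain_isolated(reqs)  # small-1 finishes at t=1,  small-2 at t=2
```

The simulation captures only head-of-line blocking, not GC pauses or I/O contention, but it shows the effect the B test is designed to measure: the "small" workload's completion times drop from t=21/22 to t=1/2 once it no longer shares a queue with the "big" workload.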