From: Vincent Barat <vincent.barat@gmail.com>
Date: Wed, 21 Nov 2012 19:37:16 +0100
To: user@hbase.apache.org
Subject: Re: HBase Tuning
Forget about this: it does not help.

On 20/11/12 19:54, Vincent Barat wrote:
> Hi,
>
> It seems there is a potential contention in the HBase client code
> (a useless synchronized method).
> You may try to use this patch:
> https://issues.apache.org/jira/browse/HBASE-7069
>
> I have faced similar issues on my production cluster since I upgraded
> to HBase 0.92. I will test this patch tomorrow...
> More info to come.
>
> Cheers
>
> On 12/10/12 12:56, Ricardo Vilaça wrote:
>> Hi,
>>
>> On 11/10/12 04:24, Stack wrote:
>>> On Wed, Oct 10, 2012 at 5:51 AM, Ricardo Vilaça wrote:
>>>> However, when adding an additional client node, also with 400
>>>> clients, the latency triples, but the RegionServers remain idle
>>>> more than 80% of the time. I have tried different values for
>>>> hbase.regionserver.handler.count and for the hbase.client.ipc.pool
>>>> size and type, but without any improvement.
>>>>
>>> I was going to suggest that it sounded like all handlers are
>>> occupied... but it sounds like you tried upping them.
>> Yes, I had already tried increasing it to 200, but without any
>> improvement in application latency. However, the output of the
>> active IPC handlers in the web interface is strange. On the region
>> servers I can see at most 4 IPC handlers active at a given instant,
>> yet all the other IPC handlers show as waiting for 0 seconds. On the
>> master the IPC handlers are also almost all in the waiting state,
>> but for a few seconds.
>>> Is this going from one client node (serving 400 clients) to two
>>> client nodes (serving 800 clients)?
>> Yes, the huge increase in latency happens when going from one client
>> node to two client nodes.
>> However, increasing the number of clients in a single node also
>> increases latency, though only slightly.
>>> Where are you measuring from? The application side? Can you figure
>>> out whether we are binding up in HBase or in the client node?
>> These measurements are from the application side. Since the huge
>> increase in latency happens when increasing the number of client
>> nodes, I suspect the bottleneck is in HBase, maybe due to some
>> incorrect configuration.
>>
>>> What does a client node look like? Is it something hosting an HBase
>>> client? A webserver or something?
>> Yes, the client node is hosting an HBase client.
>>>> Is there any configuration parameter that can improve latency with
>>>> several concurrent threads and more than one HBase client node,
>>>> and/or which JMX parameters should I monitor on the RegionServers
>>>> to check what may be causing this, and how could I achieve better
>>>> CPU utilization on the RegionServers?
>>>>
>>> It sounds like all your data is memory resident, given its size and
>>> the lack of iowait. Is that so? Studying the regionserver metrics,
>>> are they fairly constant across the addition of the new client node?
>> Yes, all data is memory resident. As far as I can see, the
>> regionserver metrics are fairly constant.
>>
>> Thanks,
>>
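For reference, the two settings discussed in the thread are set in hbase-site.xml. A sketch with illustrative values only (the numbers and the pool type shown here are examples to mark where the knobs live, not recommendations; check the docs for your HBase version):

```xml
<!-- hbase-site.xml: illustrative values only -->
<configuration>
  <property>
    <!-- RPC handler threads per region server (the value tried above was 200) -->
    <name>hbase.regionserver.handler.count</name>
    <value>200</value>
  </property>
  <property>
    <!-- Example pool type; valid names depend on the HBase version -->
    <name>hbase.client.ipc.pool.type</name>
    <value>RoundRobinPool</value>
  </property>
  <property>
    <!-- Connections kept per region server by one client process -->
    <name>hbase.client.ipc.pool.size</name>
    <value>10</value>
  </property>
</configuration>
```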
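The patch mentioned above targets client-side lock contention. A minimal sketch (not HBase's actual code; the class and method names here are made up for illustration) of why a single synchronized method on a shared client object hurts multi-threaded latency: every caller queues on one lock, so adding client threads adds waiting time even while the servers sit idle.

```java
// Sketch of the contention pattern: a synchronized method on a shared
// object serializes all callers. Names are hypothetical, not HBase API.
import java.util.concurrent.atomic.AtomicInteger;

public class ContentionDemo {
    private final AtomicInteger inside = new AtomicInteger();
    private final AtomicInteger maxInside = new AtomicInteger();

    // One shared intrinsic lock: callers queue here, like threads
    // queueing on a synchronized method of a shared connection object.
    public synchronized void locateRegion() throws InterruptedException {
        int now = inside.incrementAndGet();
        maxInside.accumulateAndGet(now, Math::max);
        Thread.sleep(5); // stand-in for the real (cheap) lookup work
        inside.decrementAndGet();
    }

    // Highest number of callers ever observed inside the method.
    public int maxConcurrentCallers() {
        return maxInside.get();
    }

    public static void main(String[] args) throws Exception {
        ContentionDemo demo = new ContentionDemo();
        Thread[] threads = new Thread[8];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                try { demo.locateRegion(); } catch (InterruptedException e) { }
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        // synchronized guarantees this prints 1: the 8 callers ran
        // strictly one at a time, each waiting for the previous one.
        System.out.println("max concurrent callers = " + demo.maxConcurrentCallers());
    }
}
```

Removing the lock (or narrowing it to the shared state that actually needs it) lets independent lookups proceed in parallel, which is the kind of change HBASE-7069 makes.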