From: Jack Levin
To: user@hbase.apache.org
Cc: hbase-user@hadoop.apache.org
Date: Mon, 9 Apr 2012 07:28:49 -0700
Subject: Re: Speeding up HBase read response

Yes, from %util you can see that your disks are working at pretty much
100%, which means you can't push them any faster. So the solution is to
add more disks, add faster disks, or add nodes and disks. This type of
overload is not related to HBase itself, but rather to your hardware
setup.

-Jack
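For anyone re-running the measurement: the figures quoted below came from
the plain "iostat -xdm 1" command requested earlier in this thread. A
minimal, assumed way to capture roughly one minute of samples on each node
during load is sketched here; the 60-sample count and the output path are
illustrative, not from the thread:

    # assumed sketch: ~60 one-second extended device stats, in MB/s, per node
    iostat -xdm 1 60 > /tmp/iostat-$(hostname).log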
On Mon, Apr 9, 2012 at 2:29 AM, ijanitran wrote:
>
> Hi, the iostat results are very similar on all nodes:
>
> Device:  rrqm/s  wrqm/s     r/s    w/s  rMB/s  wMB/s  avgrq-sz  avgqu-sz  await  svctm   %util
> xvdap1     0.00    0.00  294.00   0.00   9.27   0.00     64.54     21.97  75.44   3.40  100.10
> xvdap1     0.00    4.00  286.00   8.00   9.11   0.27     65.33      7.16  25.32   2.88   84.70
> xvdap1     0.00    0.00  283.00   0.00   8.29   0.00     59.99     10.31  35.43   2.97   84.10
> xvdap1     0.00    0.00  320.00   0.00   9.12   0.00     58.38     12.32  39.56   2.79   89.40
> xvdap1     0.00    0.00  336.63   0.00   9.18   0.00     55.84     10.67  31.42   2.78   93.47
> xvdap1     0.00    0.00  312.00   0.00  10.00   0.00     65.62     11.07  35.49   2.91   90.70
> xvdap1     0.00    0.00  356.00   0.00  10.72   0.00     61.66      9.38  26.63   2.57   91.40
> xvdap1     0.00    0.00  258.00   0.00   8.20   0.00     65.05     13.37  51.24   3.64   93.90
> xvdap1     0.00    0.00  246.00   0.00   7.31   0.00     60.88      5.87  24.53   3.14   77.30
> xvdap1     0.00    2.00  297.00   3.00   9.11   0.02     62.29     13.02  42.40   3.12   93.60
> xvdap1     0.00    0.00  292.00   0.00   9.60   0.00     67.32     11.30  39.51   3.36   98.00
> xvdap1     0.00    4.00  261.00   8.00   7.84   0.27     61.74     16.07  55.72   3.39   91.30
>
>
> Jack Levin wrote:
>>
>> Please email iostat -xdm 1, run for one minute during load on each node
>> --
>> Sent from my Android phone with K-9 Mail. Please excuse my brevity.
>>
>> ijanitran wrote:
>>
>> I have a 4-node HBase v0.90.4-cdh3u3 cluster deployed on Amazon XLarge
>> instances (16 GB RAM, 4 CPU cores) with an 8 GB heap (-Xmx) allocated to
>> the HRegion servers and 2 GB to the datanodes. HMaster/ZK/Namenode runs
>> on a separate XLarge instance. The target dataset is 100 million records
>> (each record is 10 fields of 100 bytes). Benchmarking was performed
>> concurrently from 100 parallel threads.
>>
>> I'm confused by the read latency I got compared to what the YCSB team
>> achieved and showed in their YCSB paper. They achieved a throughput of
>> up to 7000 ops/sec with a latency of 15 ms (page 10, read latency chart).
>> I can't get throughput higher than 2000 ops/sec on a 90% read / 10% write
>> workload. Writes are really fast with auto commit disabled (response
>> within a few ms), while read latency doesn't go lower than 70 ms on
>> average.
>>
>> These are some HBase settings I used:
>>
>> hbase.regionserver.handler.count=50
>> hfile.block.cache.size=0.4
>> hbase.hregion.max.filesize=1073741824
>> hbase.regionserver.codecs=lzo
>> hbase.hregion.memstore.mslab.enabled=true
>> hfile.min.blocksize.size=16384
>> hbase.hregion.memstore.block.multiplier=4
>> hbase.regionserver.global.memstore.upperLimit=0.35
>> hbase.zookeeper.property.maxClientCnxns=100
>>
>> Which settings do you recommend looking at/tuning to speed up reads with
>> HBase?
>>
>> --
>> View this message in context:
>> http://old.nabble.com/Speeding-up-HBase-read-response-tp33635226p33635226.html
>> Sent from the HBase User mailing list archive at Nabble.com.
>>
>
> --
> View this message in context:
> http://old.nabble.com/Speeding-up-HBase-read-response-tp33635226p33654666.html
> Sent from the HBase User mailing list archive at Nabble.com.
>
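
For readers reproducing the quoted setup: the nine properties listed in the
original post would normally be set in conf/hbase-site.xml on each region
server. The sketch below is an assumed illustration using a few of the quoted
values; the file path, and the idea that the remaining quoted properties
follow the same pattern, are assumptions rather than something stated in the
thread:

    <configuration>
      <property>
        <name>hbase.regionserver.handler.count</name>
        <value>50</value>
      </property>
      <property>
        <name>hfile.block.cache.size</name>
        <value>0.4</value>
      </property>
      <property>
        <name>hbase.regionserver.global.memstore.upperLimit</name>
        <value>0.35</value>
      </property>
      <!-- the remaining quoted settings (hbase.hregion.max.filesize,
           hbase.regionserver.codecs, mslab, min.blocksize,
           block.multiplier, maxClientCnxns) take the same form -->
    </configuration>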