Subject: Re: more regionservers does not improve performance
From: Jonathan Bishop <jbishop.rwc@gmail.com>
To: user@hbase.apache.org
Date: Sat, 13 Oct 2012 08:58:38 -0700

Suraj,

I bumped my regionservers all the way up to 32g from 8g. They are running on
64g and 128g machines on our cluster. Unfortunately, the machines all have
various states of loading (usually high) from other users. In ganglia I do
not see any swapping, but that has been known to happen from time to time.

Thanks for your help - I'll take a look at your links.

Jon

On Fri, Oct 12, 2012 at 7:30 PM, Suraj Varma wrote:
> Hi Jonathan:
> What specific metric on ganglia did you notice for "IO is spiking"? Is
> it your disk IO? Is your disk swapping? Do you see cpu iowait spikes?
>
> I see you have given 8g to the RegionServer ... how much RAM is
> available total on that node? What heap are the individual mappers &
> DN set to run on (i.e. check whether you are overallocated on heap
> when the _mappers_ run ... causing disk swapping ... leading to IO?).
>
> There can be multiple causes ... so you may need to look at ganglia
> stats and narrow the bottleneck down as described in
> http://hbase.apache.org/book/casestudies.perftroub.html
>
> Here's a good reference for all the memstore-related tweaks you can
> try (and also to understand what each configuration means):
> http://blog.sematext.com/2012/07/16/hbase-memstore-what-you-should-know/
>
> Also, provide more details on your schema (CFs, row size), Put sizes,
> etc. as well, to see if that triggers an idea from the list.
> --S
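For reference, the memstore settings that post walks through live in
hbase-site.xml. A minimal sketch with illustrative values rather than
recommendations (as of 0.94 the defaults are a 128 MB flush size and
0.4/0.35 global limits):

    <!-- hbase-site.xml: memstore-related knobs (values illustrative only) -->
    <property>
      <name>hbase.hregion.memstore.flush.size</name>
      <!-- per-region memstore size that triggers a flush; 256 MB here, default 128 MB -->
      <value>268435456</value>
    </property>
    <property>
      <name>hbase.regionserver.global.memstore.upperLimit</name>
      <!-- updates are blocked once total memstore usage reaches this fraction of the RS heap -->
      <value>0.4</value>
    </property>
    <property>
      <name>hbase.regionserver.global.memstore.lowerLimit</name>
      <!-- forced flushing kicks in at this fraction; keep it just under the upper limit -->
      <value>0.35</value>
    </property>

Raising the flush size only helps if the regionserver heap can absorb it,
which is why the heap questions above matter.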
> On Fri, Oct 12, 2012 at 12:46 PM, Bryan Beaudreault wrote:
> > I recommend turning on debug logging on your region servers. You may
> > need to tune certain packages back down to INFO, because there are a
> > few spammy ones, but overall it helps.
> >
> > You should see messages such as "12/10/09 14:22:57 INFO
> > regionserver.HRegion: Blocking updates for 'IPC Server handler 41 on
> > 60020' on region XXX: memstore size 256.0m is >= than blocking 256.0m
> > size". As you can see, this is an INFO message anyway, so you should be
> > able to see it now if it is happening.
> >
> > You can try upping the number of IPC handlers and the memstore flush
> > threshold. Also, maybe you are bottlenecked by the WAL. Try doing
> > put.setWriteToWAL(false), just to see if it increases performance. If
> > so, and you want to be a bit safer with regard to the WAL, you can try
> > turning on deferred flush on your table. I don't really know how to
> > increase the performance of the WAL aside from that, if this does turn
> > out to have an effect.
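As a rough illustration of those last two suggestions (skipping the WAL on a
Put, and deferred log flush as a table attribute) against the 0.94-era client
API; the table name, column family and qualifier are placeholders, and
skipping the WAL trades durability for write throughput:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.client.HBaseAdmin;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    public class WalTuningSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();

        // Option 1: skip the WAL for an individual Put. Fastest, but edits that
        // are still only in the memstore are lost if the regionserver dies
        // before a flush.
        HTable table = new HTable(conf, "mytable");
        Put put = new Put(Bytes.toBytes("some-row-key"));
        put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("some-value"));
        put.setWriteToWAL(false);
        table.put(put);
        table.close();

        // Option 2: keep the WAL but let it sync in the background (deferred
        // log flush), set as a table attribute.
        HBaseAdmin admin = new HBaseAdmin(conf);
        HTableDescriptor desc = admin.getTableDescriptor(Bytes.toBytes("mytable"));
        desc.setDeferredLogFlush(true);
        admin.disableTable("mytable");
        admin.modifyTable(Bytes.toBytes("mytable"), desc);
        admin.enableTable("mytable");
        admin.close();
      }
    }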
> > On Fri, Oct 12, 2012 at 3:15 PM, Jonathan Bishop wrote:
> > > Kevin,
> > >
> > > Sorry, I am fairly new to HBase. Can you be specific about what
> > > settings I can change, and also where they are specified?
> > >
> > > Pretty sure I am not hotspotting, and increasing the memstore does
> > > not seem to have any effect.
> > >
> > > I do not see any messages in my regionserver logs concerning blocking.
> > >
> > > I suspect that I am hitting some limit in our grid, but would like to
> > > know where that limit is being imposed.
> > >
> > > Jon
> > >
> > > On Fri, Oct 12, 2012 at 6:44 AM, Kevin O'dell wrote:
> > > > Jonathan,
> > > >
> > > > Let's take a deeper look here.
> > > >
> > > > What is your memstore set at for the table/CF in question? Let's
> > > > compare that value with the flush sizes you are seeing for your
> > > > regions. If they are really small flushes, are they all to the same
> > > > region? If so, that is going to be a schema issue. If they are full
> > > > flushes, you can up your memstore, assuming you have the heap to
> > > > cover it. If they are smaller flushes but to different regions, you
> > > > most likely are suffering from global limit pressure and flushing
> > > > too soon.
> > > >
> > > > Are you flushing prematurely due to HLogs rolling? Look for "too
> > > > many hlogs" messages and look at the flushes. It may benefit you to
> > > > raise that value.
> > > >
> > > > Are you blocking? As Suraj was saying, you may be blocking in
> > > > 90-second blocks. Check the RS logs for those messages as well, and
> > > > then follow Suraj's advice.
> > > >
> > > > This is where I would start to optimize your write path. I hope the
> > > > above helps.
> > > >
> > > > On Fri, Oct 12, 2012 at 3:34 AM, Suraj Varma wrote:
> > > > > What have you configured hbase.hstore.blockingStoreFiles and
> > > > > hbase.hregion.memstore.block.multiplier to? Both of these block
> > > > > updates when the limit is hit. Try increasing them to, say, 20 and
> > > > > 4 from the default 7 and 2 and see if it helps.
> > > > >
> > > > > If this still doesn't help, see if you can set up ganglia to get a
> > > > > better insight into what is bottlenecking.
> > > > > --Suraj
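Those two properties go in hbase-site.xml on the regionservers; a sketch with
the 20/4 values suggested above (the defaults being 7 and 2):

    <!-- hbase-site.xml: raise the write-blocking thresholds -->
    <property>
      <name>hbase.hstore.blockingStoreFiles</name>
      <!-- default 7; updates to a region block once one of its stores has this many store files -->
      <value>20</value>
    </property>
    <property>
      <name>hbase.hregion.memstore.block.multiplier</name>
      <!-- default 2; updates block when a region's memstore reaches multiplier x flush size -->
      <value>4</value>
    </property>

Note that raising them defers blocking rather than removing the underlying
flush and compaction pressure.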
> > > > > On Thu, Oct 11, 2012 at 11:47 PM, Pankaj Misra wrote:
> > > > > > OK, looks like I missed reading that part in your original mail.
> > > > > > Did you try some of the compaction tweaks and configurations
> > > > > > explained in the following link for your data?
> > > > > > http://hbase.apache.org/book/regions.arch.html#compaction
> > > > > >
> > > > > > Also, how much data are you putting into the regions, and how
> > > > > > big is one region at the end of data ingestion?
> > > > > >
> > > > > > Thanks and Regards
> > > > > > Pankaj Misra
> > > > > >
> > > > > > -----Original Message-----
> > > > > > From: Jonathan Bishop [mailto:jbishop.rwc@gmail.com]
> > > > > > Sent: Friday, October 12, 2012 12:04 PM
> > > > > > To: user@hbase.apache.org
> > > > > > Subject: RE: more regionservers does not improve performance
> > > > > >
> > > > > > Pankaj,
> > > > > >
> > > > > > Thanks for the reply.
> > > > > >
> > > > > > Actually, I am using MD5 hashing to evenly spread the keys among
> > > > > > the splits, so I don't believe there is any hotspot. In fact,
> > > > > > when I monitor the web UI for HBase I see a very even load on
> > > > > > all the regionservers.
> > > > > >
> > > > > > Jon
> > > > > >
> > > > > > Sent from my Windows 8 PC <http://windows.microsoft.com/consumer-preview>
> > > > > >
> > > > > > *From:* Pankaj Misra
> > > > > > *Sent:* Thursday, October 11, 2012 8:24:32 PM
> > > > > > *To:* user@hbase.apache.org
> > > > > > *Subject:* RE: more regionservers does not improve performance
> > > > > >
> > > > > > Hi Jonathan,
> > > > > >
> > > > > > It seems to me that, while doing the split across all 40
> > > > > > mappers, the keys are not randomized enough to leverage multiple
> > > > > > regions and the pre-split strategy. This may be happening
> > > > > > because all 40 mappers may be trying to write to a single region
> > > > > > for some time, making it a HOT region, until the keys fall into
> > > > > > another region, which then becomes the HOT region; hence you may
> > > > > > be seeing a high impact of compaction cycles reducing your
> > > > > > throughput.
> > > > > >
> > > > > > Are the keys incremental? Are the keys randomized enough across
> > > > > > the splits?
> > > > > >
> > > > > > Ideally, when all 40 mappers are running you should see all the
> > > > > > regions being filled up in parallel for maximum throughput. Hope
> > > > > > it helps.
> > > > > >
> > > > > > Thanks and Regards
> > > > > > Pankaj Misra
> > > > > >
> > > > > > ________________________________________
> > > > > > From: Jonathan Bishop [jbishop.rwc@gmail.com]
> > > > > > Sent: Friday, October 12, 2012 5:38 AM
> > > > > > To: user@hbase.apache.org
> > > > > > Subject: more regionservers does not improve performance
> > > > > >
> > > > > > Hi,
> > > > > >
> > > > > > I am running an MR job with 40 simultaneous mappers, each of
> > > > > > which does puts to HBase. I have ganged up the puts into groups
> > > > > > of 1000 (this seems to help quite a bit) and also made sure that
> > > > > > the table is pre-split into 100 regions, and that the row keys
> > > > > > are randomized using MD5 hashing.
> > > > > >
> > > > > > My cluster size is 10, and I am allowing 4 mappers per tasktracker.
> > > > > >
> > > > > > In my MR job I know that the mappers are able to generate puts
> > > > > > much faster than the puts can be handled in HBase. In other
> > > > > > words, if I let the mappers run without doing HBase puts, then
> > > > > > everything scales as you would expect with the number of mappers
> > > > > > created. It is the HBase puts which seem to be the bottleneck.
> > > > > >
> > > > > > What is strange is that I do not get much run-time improvement
> > > > > > by increasing the number of regionservers beyond about 4.
> > > > > > Indeed, it seems that the system runs slower with 8
> > > > > > regionservers than with 4.
> > > > > >
> > > > > > I have added the following in hbase-env.sh, hoping this would
> > > > > > help (from the book HBase in Action):
> > > > > >
> > > > > > export HBASE_OPTS="-Xmx8g"
> > > > > > export HBASE_REGIONSERVER_OPTS="-Xmx8g -Xms8g -Xmn128m -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70"
> > > > > >
> > > > > > # Uncomment below to enable java garbage collection logging in the .out file.
> > > > > > export HBASE_OPTS="${HBASE_OPTS} -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:${HBASE_HOME}/logs/gc-hbase.log"
> > > > > >
> > > > > > Monitoring HBase through the web UI, I see that there are pauses
> > > > > > for flushing, which seems to run pretty quickly, and for
> > > > > > compacting, which seems to take somewhat longer.
> > > > > >
> > > > > > Any advice for making this run faster would be greatly
> > > > > > appreciated. Currently I am looking into installing Ganglia to
> > > > > > better monitor my cluster, but I have yet to get that running.
> > > > > >
> > > > > > I suspect an I/O issue, as the regionservers do not seem
> > > > > > terribly loaded.
> > > > > >
> > > > > > Thanks,
> > > > > >
> > > > > > Jon
> > > >
> > > > --
> > > > Kevin O'Dell
> > > > Customer Operations Engineer, Cloudera
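For concreteness, a sketch of the write path described in that original
message: a mapper batching puts in groups of 1000 against a pre-split table,
with an MD5 prefix on the row key. The table name, column family/qualifier
and the exact key scheme are placeholders, not what was actually used
(0.94-era client API):

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.hadoop.hbase.util.MD5Hash;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class BatchedPutMapper extends Mapper<LongWritable, Text, NullWritable, NullWritable> {

      private static final int BATCH_SIZE = 1000;   // ship puts in groups of 1000
      private HTable table;
      private final List<Put> batch = new ArrayList<Put>(BATCH_SIZE);

      @Override
      protected void setup(Context context) throws IOException {
        table = new HTable(HBaseConfiguration.create(), "mytable"); // placeholder table name
        table.setAutoFlush(false);                  // buffer puts client-side
        table.setWriteBufferSize(8 * 1024 * 1024);  // 8 MB write buffer (illustrative)
      }

      @Override
      protected void map(LongWritable offset, Text line, Context context) throws IOException {
        // Prefix the natural key with part of its MD5 so rows spread over the pre-split regions.
        String naturalKey = line.toString();
        String prefix = MD5Hash.getMD5AsHex(Bytes.toBytes(naturalKey)).substring(0, 8);
        Put put = new Put(Bytes.toBytes(prefix + ":" + naturalKey));
        put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes(naturalKey));
        batch.add(put);
        if (batch.size() >= BATCH_SIZE) {
          table.put(batch);   // hands the group to the client write buffer
          batch.clear();
        }
      }

      @Override
      protected void cleanup(Context context) throws IOException {
        if (!batch.isEmpty()) {
          table.put(batch);
        }
        table.close();        // flushes any remaining buffered puts
      }
    }

With auto-flush off, the 1000-put groups land in the client write buffer and
go out in larger RPCs, so the batching mostly saves per-call overhead in the
mapper itself.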
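And one way the 100-region pre-split could be done for hex MD5-prefixed keys,
again only a sketch under assumed names and key layout:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.client.HBaseAdmin;
    import org.apache.hadoop.hbase.util.Bytes;

    public class PreSplitTable {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);

        HTableDescriptor desc = new HTableDescriptor("mytable"); // placeholder name
        desc.addFamily(new HColumnDescriptor("cf"));

        // 99 split points give ~100 regions. Keys are assumed to start with a
        // lowercase hex MD5 prefix, so evenly spaced two-character hex split
        // points spread the load across regions.
        byte[][] splits = new byte[99][];
        for (int i = 0; i < splits.length; i++) {
          splits[i] = Bytes.toBytes(String.format("%02x", (i + 1) * 256 / 100));
        }
        admin.createTable(desc, splits);
        admin.close();
      }
    }

The shell's SPLITS / SPLITS_FILE options to create accomplish the same thing
without writing code.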