From: Bryan Beaudreault <bbeaudreault@hubspot.com>
Date: Fri, 12 Oct 2012 15:46:40 -0400
Subject: Re: more regionservers does not improve performance
To: user@hbase.apache.org

I recommend turning on debug logging on your region servers. You may need to tune certain packages back down to INFO, because a few of them are spammy, but overall it helps. You should see messages such as "12/10/09 14:22:57 INFO regionserver.HRegion: Blocking updates for 'IPC Server handler 41 on 60020' on region XXX: memstore size 256.0m is >= than blocking 256.0m size". As you can see, that particular message is logged at INFO anyway, so you should be able to see it now if it is happening.

You can try upping the number of IPC handlers and the memstore flush threshold. Also, maybe you are bottlenecked by the WAL. Try doing put.setWriteToWAL(false), just to see if it increases performance. If it does, and you want to be a bit safer with regard to the WAL, you can try turning on deferred flush for your table. Aside from that, I don't really know how to increase WAL performance, if this does turn out to have an effect.
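A minimal sketch of what those two WAL experiments could look like against the 0.92/0.94-era client API (the table, family, and qualifier names are placeholders, not anything from this thread):

  import java.io.IOException;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.HTableDescriptor;
  import org.apache.hadoop.hbase.client.HBaseAdmin;
  import org.apache.hadoop.hbase.client.HTable;
  import org.apache.hadoop.hbase.client.Put;
  import org.apache.hadoop.hbase.util.Bytes;

  public class WalExperiments {
      public static void main(String[] args) throws IOException {
          Configuration conf = HBaseConfiguration.create();

          // Experiment 1: skip the WAL for each Put. Unsafe (data loss on a
          // regionserver crash), but it isolates the WAL as a bottleneck.
          HTable table = new HTable(conf, "mytable");
          Put put = new Put(Bytes.toBytes("some-row-key"));
          put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("value"));
          put.setWriteToWAL(false);
          table.put(put);
          table.close();

          // Experiment 2: keep the WAL but flush it lazily, by turning on
          // deferred log flush for the table.
          HBaseAdmin admin = new HBaseAdmin(conf);
          admin.disableTable("mytable");
          HTableDescriptor desc = admin.getTableDescriptor(Bytes.toBytes("mytable"));
          desc.setDeferredLogFlush(true);
          admin.modifyTable(Bytes.toBytes("mytable"), desc);
          admin.enableTable("mytable");
      }
  }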
On Fri, Oct 12, 2012 at 3:15 PM, Jonathan Bishop wrote:

> Kevin,
>
> Sorry, I am fairly new to HBase. Can you be specific about what settings I can change, and also where they are specified?
>
> Pretty sure I am not hotspotting, and increasing the memstore does not seem to have any effect.
>
> I do not see any messages in my regionserver logs concerning blocking.
>
> I suspect that I am hitting some limit in our grid, but would like to know where that limit is being imposed.
>
> Jon
>
> On Fri, Oct 12, 2012 at 6:44 AM, Kevin O'dell wrote:
>
> > Jonathan,
> >
> > Let's take a deeper look here.
> >
> > What is your memstore set at for the table/CF in question? Let's compare that value with the flush size you are seeing for your regions. If they are really small flushes, is it all going to the same region? If so, that is going to be a schema issue. If they are full flushes, you can up your memstore, assuming you have the heap to cover it. If they are smaller flushes but to different regions, you are most likely suffering from global limit pressure and flushing too soon.
> >
> > Are you flushing prematurely due to HLogs rolling? Look for "too many hlogs" messages and look at the flushes. It may benefit you to raise that value.
> >
> > Are you blocking? As Suraj was saying, you may be blocking in 90-second stretches. Check the RS logs for those messages as well, and then follow Suraj's advice.
> >
> > This is where I would start to optimize your write path. I hope the above helps.
> >
> > On Fri, Oct 12, 2012 at 3:34 AM, Suraj Varma wrote:
> >
> > > What have you configured hbase.hstore.blockingStoreFiles and hbase.hregion.memstore.block.multiplier to? Both of these block updates when the limit is hit. Try increasing them to, say, 20 and 4 from the defaults of 7 and 2 and see if it helps.
> > >
> > > If this still doesn't help, see if you can set up Ganglia to get better insight into what is bottlenecking.
> > > --Suraj
> > >
> > > On Thu, Oct 11, 2012 at 11:47 PM, Pankaj Misra wrote:
> > >
> > > > OK, looks like I missed reading that part in your original mail. Did you try some of the compaction tweaks and configurations explained in the following link for your data?
> > > > http://hbase.apache.org/book/regions.arch.html#compaction
> > > >
> > > > Also, how much data are you putting into the regions, and how big is one region at the end of data ingestion?
> > > >
> > > > Thanks and Regards
> > > > Pankaj Misra
> > > >
> > > > -----Original Message-----
> > > > From: Jonathan Bishop [mailto:jbishop.rwc@gmail.com]
> > > > Sent: Friday, October 12, 2012 12:04 PM
> > > > To: user@hbase.apache.org
> > > > Subject: RE: more regionservers does not improve performance
> > > >
> > > > Pankaj,
> > > >
> > > > Thanks for the reply.
> > > >
> > > > Actually, I am using MD5 hashing to evenly spread the keys among the splits, so I don't believe there is any hotspot. In fact, when I monitor the web UI for HBase I see a very even load on all the regionservers.
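The key randomization being described here is usually done by prefixing the natural key with (part of) its MD5 hash. A sketch of that pattern follows; the hash length, separator, and key names are illustrative, not Jonathan's actual scheme:

  import org.apache.hadoop.hbase.client.Put;
  import org.apache.hadoop.hbase.util.Bytes;
  import org.apache.hadoop.hbase.util.MD5Hash;

  public class SaltedKeyExample {
      // Prefix the natural key with part of its MD5 hash so writes spread
      // evenly across pre-split regions instead of piling onto one hot region.
      static byte[] saltedRowKey(String naturalKey) {
          String prefix = MD5Hash.getMD5AsHex(Bytes.toBytes(naturalKey)).substring(0, 8);
          return Bytes.toBytes(prefix + "-" + naturalKey);
      }

      public static void main(String[] args) {
          Put put = new Put(saltedRowKey("order-0000123"));
          put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("value"));
          System.out.println(Bytes.toString(put.getRow()));
      }
  }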
> > > >
> > > > Jon
> > > >
> > > > Sent from my Windows 8 PC <http://windows.microsoft.com/consumer-preview>
> > > >
> > > > *From:* Pankaj Misra
> > > > *Sent:* Thursday, October 11, 2012 8:24:32 PM
> > > > *To:* user@hbase.apache.org
> > > > *Subject:* RE: more regionservers does not improve performance
> > > >
> > > > Hi Jonathan,
> > > >
> > > > What seems to me is that, while doing the split across all 40 mappers, the keys are not randomized enough to leverage the multiple regions and the pre-split strategy. This may be happening because all 40 mappers may be trying to write to a single region for some time, making it a hot region until the keys fall into another region, which then becomes the hot region; hence you may be seeing a heavy impact from compaction cycles reducing your throughput.
> > > >
> > > > Are the keys incremental? Are the keys randomized enough across the splits?
> > > >
> > > > Ideally, when all 40 mappers are running you should see all the regions being filled in parallel for maximum throughput. Hope it helps.
> > > >
> > > > Thanks and Regards
> > > > Pankaj Misra
> > > >
> > > > ________________________________________
> > > > From: Jonathan Bishop [jbishop.rwc@gmail.com]
> > > > Sent: Friday, October 12, 2012 5:38 AM
> > > > To: user@hbase.apache.org
> > > > Subject: more regionservers does not improve performance
> > > >
> > > > Hi,
> > > >
> > > > I am running a MR job with 40 simultaneous mappers, each of which does puts to HBase. I have ganged up the puts into groups of 1000 (this seems to help quite a bit) and also made sure that the table is pre-split into 100 regions and that the row keys are randomized using MD5 hashing.
> > > >
> > > > My cluster size is 10, and I am allowing 4 mappers per tasktracker.
> > > >
> > > > In my MR job I know that the mappers are able to generate puts much faster than the puts can be handled in HBase. In other words, if I let the mappers run without doing HBase puts, then everything scales as you would expect with the number of mappers created. It is the HBase puts which seem to be the bottleneck.
> > > >
> > > > What is strange is that I do not get much run-time improvement by increasing the number of regionservers beyond about 4. Indeed, it seems that the system runs slower with 8 regionservers than with 4.
> > > >
> > > > I have added the following in hbase-env.sh, hoping this would help... (from the book HBase in Action)
> > > >
> > > > export HBASE_OPTS="-Xmx8g"
> > > > export HBASE_REGIONSERVER_OPTS="-Xmx8g -Xms8g -Xmn128m -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70"
> > > >
> > > > # Uncomment below to enable java garbage collection logging in the .out file.
> > > > export HBASE_OPTS="${HBASE_OPTS} -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:${HBASE_HOME}/logs/gc-hbase.log"
> > > >
> > > > Monitoring HBase through the web UI, I see that there are pauses for flushing, which seems to run pretty quickly, and for compacting, which seems to take somewhat longer.
> > > >
> > > > Any advice for making this run faster would be greatly appreciated.
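The "puts ganged up into groups of 1000" described above corresponds roughly to client-side batching like the following sketch (table, family, and qualifier names are placeholders; the write-buffer size is illustrative, and in the 0.92/0.94 client disabling auto-flush lets the write buffer do additional batching):

  import java.io.IOException;
  import java.util.ArrayList;
  import java.util.List;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.client.HTable;
  import org.apache.hadoop.hbase.client.Put;
  import org.apache.hadoop.hbase.util.Bytes;

  public class BatchedPuts {
      public static void main(String[] args) throws IOException {
          Configuration conf = HBaseConfiguration.create();
          HTable table = new HTable(conf, "mytable");

          // Buffer puts client-side instead of one RPC per Put.
          table.setAutoFlush(false);
          table.setWriteBufferSize(8 * 1024 * 1024); // 8 MB, illustrative

          List<Put> batch = new ArrayList<Put>(1000);
          for (int i = 0; i < 100000; i++) {
              Put put = new Put(Bytes.toBytes("row-" + i)); // real keys would be MD5-salted
              put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("value-" + i));
              batch.add(put);
              if (batch.size() == 1000) { // send in groups of 1000, as in the original post
                  table.put(batch);
                  batch.clear();
              }
          }
          if (!batch.isEmpty()) {
              table.put(batch);
          }
          table.flushCommits(); // drain anything left in the write buffer
          table.close();
      }
  }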
> > > > Currently I am looking into installing Ganglia to better monitor my cluster, but I have yet to get that running.
> > > >
> > > > I suspect an I/O issue, as the regionservers do not seem terribly loaded.
> > > >
> > > > Thanks,
> > > >
> > > > Jon
> >
> > --
> > Kevin O'Dell
> > Customer Operations Engineer, Cloudera
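For reference, the server-side knobs suggested in this thread live in hbase-site.xml on the region servers (a restart is needed for them to take effect). A sketch of the relevant entries follows; the blockingStoreFiles and block.multiplier values are the ones Suraj suggested, while the handler count and memstore flush size are illustrative values to experiment with, not recommendations from the thread:

  <property>
    <name>hbase.hstore.blockingStoreFiles</name>
    <value>20</value>
    <!-- default 7; updates block once a store has this many files -->
  </property>
  <property>
    <name>hbase.hregion.memstore.block.multiplier</name>
    <value>4</value>
    <!-- default 2; updates block when a memstore reaches multiplier * flush size -->
  </property>
  <property>
    <name>hbase.regionserver.handler.count</name>
    <value>30</value>
    <!-- illustrative; the "number of IPC handlers" mentioned above -->
  </property>
  <property>
    <name>hbase.hregion.memstore.flush.size</name>
    <value>134217728</value>
    <!-- illustrative 128 MB; the "memstore flush threshold" -->
  </property>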