From: Bryan Beaudreault
Date: Fri, 24 Jan 2014 21:44:07 -0500
Subject: Re: Hbase tuning for heavy write cluster
To: user@hbase.apache.org

It seems from your ingestion rate that you are still blowing through HFiles too fast. You're going to want to raise MEMSTORE_FLUSHSIZE for the table from the default of 128 MB. If OpenTSDB is the only thing on this cluster, you can do the math pretty easily to find the maximum allowable value, based on your heap size and accounting for the 40% (default) used for the block cache.
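As a rough sketch of that math and the table change, assuming a 16 GB region-server heap, the 40% block-cache figure above, and the default 40% global memstore limit (hbase.regionserver.global.memstore.upperLimit); the 256 MB value is purely illustrative, not a recommendation:

  # HBase shell (0.94-era syntax). 16 GB heap * 0.40 global memstore limit
  # leaves roughly 6.4 GB shared by all memstores on a region server, so pick
  # a per-region flush size that the regions actually taking writes can reach
  # before global memstore pressure forces premature flushes.
  hbase> disable 'tsdb'
  hbase> alter 'tsdb', METHOD => 'table_att', MEMSTORE_FLUSHSIZE => '268435456'  # 256 MB, illustrative
  hbase> enable 'tsdb'
  hbase> describe 'tsdb'  # confirm the attribute took effect

Since OpenTSDB concentrates writes on the hot (most recent) regions, a larger flush size mainly helps those regions write bigger, fewer HFiles.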
On Fri, Jan 24, 2014 at 9:38 PM, Rohit Dev wrote:
> Hi Kevin,
>
> We have about 160 regions per server, with a 16 GB region size and 10
> drives for HBase. I've looked at disk IO and that doesn't seem to be
> a problem (% utilization is < 2 across all disks).
>
> Any suggestion on what heap size I should allocate? Normally I allocate
> 16GB.
>
> Also, I read that increasing hbase.hstore.blockingStoreFiles and
> hbase.hregion.memstore.block.multiplier is a good idea for a write-heavy
> cluster, but in my case it seems to be heading in the wrong direction.
>
> Thanks
>
> On Fri, Jan 24, 2014 at 6:31 PM, Kevin O'dell wrote:
> > Rohit,
> >
> > 64GB heap is not ideal, you will run into some weird issues. How many
> > regions are you running per server, how many drives in each node, any
> > other settings have you changed from default?
> > On Jan 24, 2014 6:22 PM, "Rohit Dev" wrote:
> >
> >> Hi,
> >>
> >> We are running OpenTSDB on a CDH 4.3 HBase cluster, with mostly
> >> default settings. The cluster is write-heavy and I'm trying to see
> >> what parameters I can tune to optimize write performance.
> >>
> >> # I get messages related to Memstore [1] and Slow Response [2] very
> >> often. Is this an indication of any issue?
> >>
> >> I tried increasing some parameters on one node:
> >> - hbase.hstore.blockingStoreFiles - from the default 7 to 15
> >> - hbase.hregion.memstore.block.multiplier - from the default 2 to 8
> >> - and heap size from 16GB to 64GB
> >>
> >> * The compaction queue went up to ~200 within 60 minutes after
> >> restarting the region server with the new parameters, and the log
> >> started to get even more noisy.
> >>
> >> Can anyone please suggest whether I'm going in the right direction with
> >> these new settings, or if there are other things that I could monitor or
> >> change to make it better?
> >>
> >> Thank you!
> >>
> >>
> >> [1]
> >> INFO org.apache.hadoop.hbase.regionserver.HRegion: Blocking updates
> >> for 'IPC Server handler 19 on 60020' on region
> >> tsdb,\x008XR\xE0i\x90\x00\x00\x02Q\x7F\x1D\x00\x00(\x00\x0B]\x00\x008M(r\x00\x00Bl\xA7\x8C,1390556781703.0771bf90cab25c503d3400206417f6bf.:
> >> memstore size 256.3 M is >= than blocking 256 M size
> >>
> >> [2]
> >> WARN org.apache.hadoop.ipc.HBaseServer: (responseTooSlow):
> >> {"processingtimems":17887,"call":"multi(org.apache.hadoop.hbase.client.MultiAction@586940ea),
> >> rpc version=1, client version=29,
> >> methodsFingerPrint=0","client":"192.168.10.10:54132",
> >> "starttimems":1390587959182,"queuetimems":1498,"class":"HRegionServer","responsesize":0,"method":"multi"}
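For reference, the region-server knobs discussed in the quoted thread live in hbase-site.xml; below is a sketch using the values Rohit tried, shown only to make the settings concrete, not as a recommendation. (The "blocking 256 M size" in log [1] is the 128 MB default flush size times the default block multiplier of 2, which is why raising MEMSTORE_FLUSHSIZE and/or the multiplier moves that ceiling.)

  <!-- hbase-site.xml on each region server; a region-server restart is required -->
  <property>
    <name>hbase.hstore.blockingStoreFiles</name>
    <value>15</value>  <!-- default 7; higher values defer write blocking but let compactions fall further behind -->
  </property>
  <property>
    <name>hbase.hregion.memstore.block.multiplier</name>
    <value>8</value>  <!-- default 2; a region blocks updates at multiplier * flush size -->
  </property>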