Date: Tue, 8 Oct 2013 14:55:16 +0900
Subject: Re: HBase Random Read latency > 100ms
From: Ramu M S
To: user@hbase.apache.org

Hi All,

Average latency is still around 80ms. I have done the following:

1. Enabled Snappy compression
2. Reduced the HFile size to 8 GB

Should I attribute these results to bad disk configuration, or is there anything else to investigate?

- Ramu

On Tue, Oct 8, 2013 at 10:56 AM, Ramu M S wrote:

> Vladimir,
>
> Thanks for the insights into the future caching features. Looks very
> interesting.
>
> - Ramu
>
>
> On Tue, Oct 8, 2013 at 10:45 AM, Vladimir Rodionov <
> vrodionov@carrieriq.com> wrote:
>
>> Ramu,
>>
>> If your working set of data fits into 192 GB, you may get an
>> additional boost by utilizing the OS page cache, or wait for the 0.98
>> release, which introduces a new bucket cache implementation (a port of
>> the Facebook L2 cache). You can try the vanilla bucket cache in 0.96
>> (not released yet, but due soon). Both caches store data off-heap, but
>> the Facebook version can store encoded and compressed data, while the
>> vanilla bucket cache cannot. There are some options for utilizing the
>> available RAM efficiently (at least in upcoming HBase releases). If
>> your data set does not fit in RAM, then your only hope is your 24 SAS
>> drives, and what you get out of them depends on your RAID settings,
>> disk IO performance, and HDFS configuration (I think the latest Hadoop
>> is preferable here).
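[Editor's note: a minimal sketch of the bucket cache configuration being discussed, for hbase-site.xml. Property names are from the 0.96/0.98-era configuration; the 16 GB size is purely illustrative, and you should verify the exact semantics (and the companion direct-memory setting in hbase-env.sh) against your release's documentation.]

```xml
<!-- Sketch only: enables the off-heap bucket cache described above. -->
<property>
  <name>hbase.bucketcache.ioengine</name>
  <!-- "offheap" for direct memory; a "file:/path" engine also exists -->
  <value>offheap</value>
</property>
<property>
  <name>hbase.bucketcache.size</name>
  <!-- cache capacity in MB; 16 GB here is an illustrative assumption -->
  <value>16384</value>
</property>
```

The off-heap engine also needs enough JVM direct memory, e.g. raising -XX:MaxDirectMemorySize (later releases expose this as HBASE_OFFHEAPSIZE in hbase-env.sh) above the cache size.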
>>
>> The OS page cache is the most vulnerable and volatile: it cannot be
>> controlled and can easily be polluted either by other processes or by
>> HBase itself (e.g. a long scan).
>> With the block cache you have more control, but the first truly usable
>> *official* implementation is going to be part of the 0.98 release.
>>
>> As far as I understand, your use case would definitely be covered by
>> something similar to BigTable's ScanCache (a row cache), but there is
>> no such cache in HBase yet.
>> One major advantage of a RowCache over the BlockCache (apart from
>> being much more efficient in RAM usage) is resilience to region
>> compactions. Each minor region compaction partially invalidates a
>> region's data in the BlockCache, and a major compaction invalidates
>> that region's data completely. This would not be the case with a
>> RowCache (were it implemented).
>>
>> Best regards,
>> Vladimir Rodionov
>> Principal Platform Engineer
>> Carrier IQ, www.carrieriq.com
>> e-mail: vrodionov@carrieriq.com
>>
>> ________________________________________
>> From: Ramu M S [ramu.malur@gmail.com]
>> Sent: Monday, October 07, 2013 5:25 PM
>> To: user@hbase.apache.org
>> Subject: Re: HBase Random Read latency > 100ms
>>
>> Vladimir,
>>
>> Yes, I am fully aware of the HDD limitations and the wrong
>> configuration w.r.t. RAID.
>> Unfortunately, the hardware is leased from others for this work, and I
>> wasn't consulted on the h/w specification for the tests I am doing
>> now. The RAID cannot be turned off or set to RAID-0 either.
>>
>> The production system is specified according to the Hadoop needs (100
>> nodes with 16-core CPUs, 192 GB RAM, and 24 x 600 GB SAS drives; RAID
>> cannot be completely turned off, so we are creating one virtual disk
>> containing only one physical disk, with the VD RAID level set to
>> RAID-0). These systems are still not available. If you have any
>> suggestions on the production setup, I will be glad to hear them.
>>
>> Also, as pointed out earlier, we are planning to use HBase as an
>> in-memory KV store as well, to access the latest data.
>> That's why the RAM in this configuration is so large. But it looks
>> like we would run into more problems than gains from this.
>>
>> Keeping that aside, I was trying to get the maximum out of the
>> current cluster. Or, as you said, is 500-1000 OPS the max I could get
>> out of this setup?
>>
>> Regards,
>> Ramu
>>
>>
>>
>> Confidentiality Notice: The information contained in this message,
>> including any attachments hereto, may be confidential and is intended
>> to be read only by the individual or entity to whom this message is
>> addressed. If the reader of this message is not the intended
>> recipient or an agent or designee of the intended recipient, please
>> note that any review, use, disclosure or distribution of this message
>> or its attachments, in any form, is strictly prohibited. If you have
>> received this message in error, please immediately notify the sender
>> and/or Notifications@carrieriq.com and delete or destroy any copy of
>> this message and its attachments.
>>
>
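[Editor's note: the 500-1000 OPS figure raised above can be sanity-checked with a back-of-envelope calculation. Every input below is a generic assumption about SAS drives and cache-miss behavior, not a measurement from this thread.]

```python
# Back-of-envelope ceiling for disk-bound random reads on one region server.
# All numbers are rough assumptions, not measurements.

iops_per_sas_drive = 150   # ballpark random-read IOPS for a 10K RPM SAS drive
drives = 24                # drives per node, per the hardware described above
disk_reads_per_get = 1     # assumes one HFile block read per get on a cache miss

raw_iops = iops_per_sas_drive * drives        # theoretical aggregate disk IOPS
ops_ceiling = raw_iops // disk_reads_per_get  # ideal gets/sec, perfectly spread

# RAID virtual-disk overhead, extra HDFS-level reads (index blocks,
# checksums), and uneven key distribution can cut this by several times,
# which is how an estimate in the 500-1000 OPS range arises.
print(f"aggregate disk IOPS: {raw_iops}, ideal get ceiling: {ops_ceiling}/s")
```

A single seek at 150 IOPS costs roughly 6-7 ms, so an 80 ms average latency suggests multiple disk reads and/or queuing per get, not a single clean seek.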