Subject: Re: HBase / HDFS on EBS?
Date: Tue, 4 Jan 2011 14:36:51 -0500
From: Matt Corgan <mcorgan@hotpads.com>
To: user@hbase.apache.org

One nice thing is that you can create many small EBS volumes per instance, and since each EBS volume does ~100 IOPS, you can get really good aggregate random read performance.

On Tue, Jan 4, 2011 at 2:05 PM, Phil Whelan wrote:

> Hi Otis,
>
> I have used Hadoop on EBS, but not HBase yet (apologies for not being
> HBase-specific).
>
> > * Supposedly ephemeral disks can be faster, but EC2 claims EBS is
> > faster. People who benchmarked EBS mention that its performance varies
> > a lot. Don't local disks suffer from the noisy-neighbour problem?
>
> EBS volumes are much faster than the EC2 image's local disk, in my
> experience.
>
> > * EBS disks are not local. They are far from the CPU. What happens to
> > data locality if you have data on EBS?
>
> Amazon uses a local *fibre* network to connect EBS to the machine, so
> that is not much of a problem.
>
> > * MR jobs typically read and write a lot. I wonder if this ends up
> > being very expensive?
>
> Costs do tend to creep up on AWS. On the plus side, you can roughly
> calculate how expensive your MR jobs will be. Using your own hardware is
> definitely more cost-effective.
>
> > * Data on ephemeral disks is lost when an instance terminates. Do
> > people really rely purely on having N DNs and a high enough
> > replication factor to prevent data loss?
>
> I found local EC2 image disks far slower than EBS, so I stopped using
> them. I do not recall losing more than one EBS volume, but I have lost
> many EC2 instances (and the local disks with them). Now I always choose
> EBS-backed EC2 instances.
>
> > * With EBS you could just create a larger volume when you need more
> > disk space and attach it to your existing DN.
> > If you are running out of disk space on local disks, what are the
> > options? Do you have to launch more EC2 instances even if all you need
> > is disk space, not more CPUs?
>
> Yes, you cannot increase the local disk space of an EC2 instance without
> getting a larger instance type. As I understand it, it is good for Hadoop
> to have one disk per CPU core for MR.
>
> Thanks,
> Phil
>
> --
> Twitter : http://www.twitter.com/philwhln
> LinkedIn : http://ca.linkedin.com/in/philwhln
> Blog : http://www.philwhln.com
>
> > Thanks,
> > Otis
> > ----
> > Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
> > Lucene ecosystem search :: http://search-lucene.com/
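The "many small EBS volumes" idea above is easy to reason about with a back-of-envelope calculation: if each volume independently sustains ~100 random-read IOPS (the figure quoted in this thread) and a DataNode spreads its blocks across all of them (e.g. by listing each mount point as a separate entry in `dfs.data.dir`), aggregate IOPS scales roughly with the volume count. This is only a sketch; the `efficiency` derating factor below is a hypothetical knob for uneven load across volumes, not a measured number.

```python
# Rough estimate of aggregate random-read IOPS when a DataNode stripes
# its storage across several small EBS volumes, per the suggestion above.
# ~100 IOPS per volume comes from the thread; real performance varies.

def aggregate_iops(num_volumes, iops_per_volume=100, efficiency=0.9):
    """Estimate combined random-read IOPS across independent EBS volumes.

    `efficiency` is a hypothetical derating factor for load skew across
    volumes; calibrate it against your own measurements.
    """
    return int(num_volumes * iops_per_volume * efficiency)

# One DataNode with 8 small volumes vs. a single large volume:
print(aggregate_iops(8))   # 8 volumes -> roughly 720 IOPS
print(aggregate_iops(1))   # 1 volume  -> roughly 90 IOPS
```

The scaling works because each EBS volume queues I/O independently; a single large volume of the same total size would still be bounded by one volume's IOPS ceiling, while HDFS round-robins block writes across all configured data directories.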