Subject: Re: Using HBase on other file systems
From: Edward Capriolo
To: hbase-user@hadoop.apache.org
Date: Thu, 13 May 2010 11:09:06 -0400

On Thu, May 13, 2010 at 12:26 AM, Jeff Hammerbacher wrote:

> Some projects sacrifice stability and manageability for performance (see,
> e.g., http://gluster.org/pipermail/gluster-users/2009-October/003193.html).
>
> On Wed, May 12, 2010 at 11:15 AM, Edward Capriolo wrote:
>
> > On Wed, May 12, 2010 at 1:30 PM, Andrew Purtell wrote:
> >
> > > Before recommending Gluster I suggest you set up a test cluster and
> > > then randomly kill bricks.
> > >
> > > Also as pointed out in another mail, you'll want to colocate
> > > TaskTrackers on Gluster bricks to get I/O locality, yet there is no
> > > way for Gluster to export stripe locations back to Hadoop.
> > >
> > > It seems a poor choice.
> > >
> > > - Andy
> > >
> > > > From: Edward Capriolo
> > > > Subject: Re: Using HBase on other file systems
> > > > To: "hbase-user@hadoop.apache.org"
> > > > Date: Wednesday, May 12, 2010, 6:38 AM
> > > >
> > > > On Tuesday, May 11, 2010, Jeff Hammerbacher wrote:
> > > > > Hey Edward,
> > > > >
> > > > > > I do think that if you compare GoogleFS to HDFS, GFS looks more
> > > > > > full featured.
> > > > >
> > > > > What features are you missing? Multi-writer append was explicitly
> > > > > called out by Sean Quinlan as a bad idea, and rolled back. From
> > > > > internal conversations with Google engineers, erasure coding of
> > > > > blocks suffered a similar fate. Native client access would
> > > > > certainly be nice, but FUSE gets you most of the way there.
> > > > > Scalability/availability of the NN, RPC QoS, and alternative block
> > > > > placement strategies are second-order features which didn't exist
> > > > > in GFS until later in its lifecycle of development as well. HDFS
> > > > > is following a similar path and has JIRA tickets with active
> > > > > discussions. I'd love to hear your feature requests, and I'll be
> > > > > sure to translate them into JIRA tickets.
> > > > >
> > > > > > I do believe my logic is reasonable. HBase has a lot of code
> > > > > > designed around HDFS. We know these tickets that get cited all
> > > > > > the time, for better random reads, or for sync() support. HBase
> > > > > > gets the benefits of HDFS and has to deal with its drawbacks.
> > > > > > Other key value stores handle storage directly.
> > > > >
> > > > > Sync() works and will be in the next release, and its absence was
> > > > > simply a result of the youth of the system. Now that that
> > > > > limitation has been removed, please point to another place in the
> > > > > code where using HDFS rather than the local file system is forcing
> > > > > HBase to make compromises. Your initial attempts on this front
> > > > > (caching, HFile, compactions) were, I hope, debunked by my
> > > > > previous email. It's also worth noting that Cassandra does all
> > > > > three, despite managing its own storage.
> > > > >
> > > > > I'm trying to learn from this exchange and always enjoy
> > > > > understanding new systems. Here's what I have so far from your
> > > > > arguments:
> > > > > 1) HBase inherits both the advantages and disadvantages of HDFS.
> > > > > I clearly agree on the general point; I'm pressing you to name
> > > > > some specific disadvantages, in hopes of helping prioritize our
> > > > > development of HDFS. So far, you've named things which are either
> > > > > a) not actually disadvantages or b) no longer true. If you can
> > > > > come up with the disadvantages, we'll certainly take them into
> > > > > account. I've certainly got a number of them on our roadmap.
> > > > > 2) If you don't want to use HDFS, you won't want to use HBase.
> > > > > Also certainly true, but I'm not sure there's much to learn from
> > > > > this assertion. I'd once again ask: why would you not want to use
> > > > > HDFS, and what is your choice in its stead?
> > > > >
> > > > > Thanks,
> > > > > Jeff
> > > >
> > > > Jeff,
> > > >
> > > > Let me first mention that you have mentioned some things as fixed
> > > > that are only fixed in trunk.
> > > >
> > > > I consider trunk futureware and I do not like to have temporal
> > > > conversations. Even when trunk becomes current there is no
> > > > guarantee that the entire problem is solved. After all, appends
> > > > were fixed in 0.19, or not, or again?
> > > >
> > > > I rescanned the GFS white paper to support my argument that HDFS is
> > > > stripped down. Found:
> > > > - Writes at offset ARE supported
> > > > - Checkpoints
> > > > - Application-level checkpoints
> > > > - Snapshots
> > > > - Shadow read-only master
> > > >
> > > > HDFS chose the features it wanted and ignored others; that is why I
> > > > called it a pure map/reduce implementation.
> > > >
> > > > My main point is that HBase by nature needs high-speed random reads
> > > > and random writes. HDFS by nature is bad at these things. If you
> > > > cannot keep a high cache hit rate via a large in-RAM block cache,
> > > > HBase is going to slam HDFS doing large block reads for small parts
> > > > of files.
> > > >
> > > > So you ask me what I would use instead. I do not think there is a
> > > > viable alternative in the 100 TB and up range, but I do think for
> > > > people in the 20 TB range something like Gluster, which is very
> > > > performance focused, might deliver amazing results in some
> > > > applications.
> >
> > I did not recommend anything:
> >
> > "people in the 20 TB range something like Gluster that is very
> > performance focused might deliver amazing results in some applications."
> >
> > I used words like "something. like. might."
> >
> > It may just be an interesting avenue of research.
> >
> > And since you mentioned
> >
> > "also as pointed out in another mail, you'll want to colocate
> > TaskTrackers on Gluster bricks to get I/O locality, yet there is no way
> > for Gluster to export stripe locations back to Hadoop."
> >
> > 1) I am sure if someone was so inclined they could find a way to export
> > that information from Gluster.
> >
> > 2) I think you meant DataNode, not TaskTracker. In any case, I remember
> > reading on list that a RegionServer is not guaranteed to be colocated
> > with a DataNode, especially after a restart. Someone was going to open
> > a ticket for it.

Posting a single link from a mailing list is anecdotal. I can point to many
posts on hadoop-user, hbase-user, and the lists of just about every product
in the world and come to the determination that the product is unstable as a
result. (I am a member of gluster-users, FYI.)

As for Gluster, people are pushing it to do much more than Hadoop. Most are
implementing caching and POSIX locks on Gluster, as it works as a true
filesystem, not a userspace filesystem with limited semantics like HDFS, so
it is going to be more complex and have more problems, but you can do things
with it that you cannot do with Hadoop.

I am not claiming that GlusterFS is more or less buggy, or performs better
or worse, than HDFS. What I am hypothesizing is: GlusterFS might have a
sweet spot. Say, 20 Gluster bricks connected by InfiniBand, with a total
storage capacity of 50 TB. Throw HBase on that InfiniBand bad boy and maybe
get amazing performance. Just maybe. Sure, HBase and Hadoop will almost
assuredly scale better on the high end, but take into account my hypothesis
and use case. Maybe I have a fixed data size but want the best performance
possible. It is all about the sweet spot for your needs.
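To make that concrete: HBase only reaches storage through the Hadoop
FileSystem API, so "throwing HBase on it" really just means pointing
hbase.rootdir at a URI every node can reach, for example a FUSE-mounted
Gluster volume exposed through file://. A rough sanity-check sketch (the
/mnt/gluster mount point is hypothetical, and this only shows reachability,
not performance):

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RootdirSanityCheck {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Hypothetical: the Gluster volume is FUSE-mounted at /mnt/gluster
    // on every node, so HBase would see it as an ordinary file:// URI.
    String rootdir = "file:///mnt/gluster/hbase";
    conf.set("hbase.rootdir", rootdir);

    // HBase talks to storage through this same FileSystem API, so if this
    // works, HBase can at least open and write under the rootdir.
    FileSystem fs = FileSystem.get(URI.create(rootdir), conf);
    Path probe = new Path(rootdir, "probe.tmp");
    FSDataOutputStream out = fs.create(probe, true);
    out.write("hello".getBytes());
    out.sync();   // the durability call the HBase WAL leans on
    out.close();
    System.out.println("wrote and synced " + probe + " via " + fs.getUri());
    fs.delete(probe, false);
  }
}

Whether such a setup actually hits the sweet spot I am describing is exactly
what would need measuring.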
I think HDFS is great, better than great, but I do not think it is the apex
of storage technology, perfect for every use case. I am not going to stop
researching, theorizing, and trying alternative systems and implementations.
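P.S. On Andy's stripe-location point and my 1) above: "export that
information" would concretely mean giving Hadoop a FileSystem shim over the
Gluster mount that overrides getFileBlockLocations(), so the JobTracker (and
HBase) could see which bricks hold a given byte range. A very rough sketch
of the hook only; the class and the brick lookup are hypothetical, since
Gluster exposes no such interface today, which is exactly Andy's objection:

import java.io.IOException;

import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.RawLocalFileSystem;

// Hypothetical sketch: a FileSystem shim over a locally mounted Gluster
// volume that reports stripe locality back to Hadoop. The brick lookup is
// invented; Gluster does not expose one today.
public class GlusterStripeAwareFileSystem extends RawLocalFileSystem {

  @Override
  public BlockLocation[] getFileBlockLocations(FileStatus file, long start,
      long len) throws IOException {
    if (file == null || len <= 0) {
      return new BlockLocation[0];
    }
    // Imaginary call: ask Gluster which bricks hold this byte range.
    String[] hosts =
        lookupBrickHosts(file.getPath().toUri().getPath(), start, len);
    String[] names = new String[hosts.length];   // "host:port" form
    for (int i = 0; i < hosts.length; i++) {
      names[i] = hosts[i] + ":0";
    }
    // One location spanning the requested range, served by those bricks.
    return new BlockLocation[] { new BlockLocation(names, hosts, start, len) };
  }

  // Placeholder: no public Gluster API provides this information today.
  private String[] lookupBrickHosts(String path, long start, long len) {
    return new String[] { "localhost" };
  }
}

Whether that lookup can actually be built against Gluster internals is the
open question.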