hbase-user mailing list archives

From Doug Meil <doug.m...@explorysmedical.com>
Subject Re: Need Help with HBase
Date Wed, 17 Aug 2011 01:06:45 GMT

(Removing the MR dist-list)

It's still true that the underlying filesystem should be HDFS.  For
development it can be standalone, but that's a different story.
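The filesystem HBase sits on is controlled by the hbase.rootdir property in hbase-site.xml. A minimal illustrative sketch of the two setups Doug describes (the host, port, and paths below are placeholders, not values from this thread):

```xml
<!-- hbase-site.xml: illustrative values only; adjust host/port/paths for your cluster -->
<configuration>
  <!-- Production: back HBase with HDFS -->
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://namenode.example.com:8020/hbase</value>
  </property>
  <!-- Standalone development: a local path also works -->
  <!--
  <property>
    <name>hbase.rootdir</name>
    <value>file:///tmp/hbase-dev</value>
  </property>
  -->
</configuration>
```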





On 8/16/11 11:53 AM, "Taylor, Ronald C" <ronald.taylor@pnnl.gov> wrote:

>Hello MS,
>
>Re file systems: while HBase can theoretically run on other scalable file
>systems, I remember somebody on the HBase list saying, in effect, that
>unless you are a file system guru and willing to put in a heck of a lot
>of work, the only practical choice as an underlying file system is
>Hadoop's HDFS. I think that was something like half a year ago or more,
>so maybe things have changed. Do any of the HBase developers on the
>HBase list have an update (or a correction to my recollection)?
>
>Ron
>
>Ronald Taylor, Ph.D.
>Computational Biology & Bioinformatics Group
>Pacific Northwest National Laboratory (U.S. Dept of Energy/Battelle)
>Richland, WA 99352
>phone: (509) 372-6568
>email: ronald.taylor@pnnl.gov
>
>From: M S Vishwanath Bhat [mailto:msvbhat@gmail.com]
>Sent: Tuesday, August 16, 2011 12:29 AM
>To: mapreduce-user@hadoop.apache.org
>Subject: Re: Need Help with HBase
>
>Hi,
>
>Just need a small clarification.
>
>HBase is used only to create and maintain big tables. That is, we can use
>HBase to create, append, extend them, and so on. And it runs on any file
>system: if we point the "hbase.rootdir" property in hbase-site.xml to an
>NFS mount point, it should still work. HBase doesn't even need Hadoop to
>create and maintain large tables. The significance of Hadoop only comes
>into the picture when I want to run map/reduce applications on a large
>table created by HBase.
>
>Is my above understanding correct? Can anyone please explain if I am
>wrong?
>
>Thanks,
>MS
>On 12 August 2011 00:31, Corey M. Dorwart <cdorwart@clearedgeit.com> wrote:
>Hello MS-
>
>Welcome to Hadoop MapReduce programming!
>
>The first step is to follow the MapReduce tutorial on Apache's website
>(http://hadoop.apache.org/common/docs/current/mapred_tutorial.html).
>Without much Java experience you are going to be at a disadvantage, but
>you are not alone. You may want to give Apache's Pig a go
>(http://pig.apache.org/). Pig is a much simpler way to program
>MapReduce jobs, with a language that more closely resembles SQL; Pig
>acts as an intermediary between you and the MapReduce code. They have
>great tutorials on that as well.
>
>Most MapReduce code is requirement-specific, but a first Word Count
>application is simple, and examples can be found readily on the web.
>
>Good Luck!
>
>-Corey
>
>From: M S Vishwanath Bhat [mailto:msvbhat@gmail.com]
>Sent: Thursday, August 11, 2011 3:00 PM
>To: mapreduce-user@hadoop.apache.org
>Subject: Need Help with HBase
>
>Hi,
>
>I'm a newbie to Hadoop and Map/Reduce applications. I have set up a
>cluster and am just running the example map/reduce applications that
>come with the Hadoop source code.
>
>I want to run some more applications. But I'm not a java developer.
>
>So if there's anyone who is willing to share the map/reduce applications
>they wrote, it would be of great help to me.
>
>
>Thanks in Advance,
>
>
>Cheers,
>MS
>
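The Word Count exercise Corey recommends can be sketched conceptually in plain Java, with no cluster required. The class and method names below are illustrative, and this shows only the map (tokenize) and reduce (sum per key) idea, not the actual Hadoop MapReduce API:

```java
import java.util.Map;
import java.util.TreeMap;

// Conceptual word count: the "map" phase emits one record per word,
// the "reduce" phase sums the counts for each distinct word.
// This is a single-process sketch of the idea, not Hadoop code.
public class WordCount {
    public static Map<String, Integer> count(String text) {
        Map<String, Integer> counts = new TreeMap<>();
        // Map phase: normalize case and split the input into words.
        for (String word : text.toLowerCase().split("\\s+")) {
            if (word.isEmpty()) continue;
            // Reduce phase: combine the per-word counts.
            counts.merge(word, 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(count("the quick brown fox jumps over the lazy dog the end"));
    }
}
```

In real Hadoop MapReduce the two phases run as separate distributed tasks with a shuffle/sort in between, but the per-key aggregation is the same idea.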

