hadoop-common-user mailing list archives

From Rita <rmorgan...@gmail.com>
Subject Re: large data and hbase
Date Tue, 12 Jul 2011 10:01:01 GMT
This is encouraging.

"Make sure HDFS is running first. Start and stop the Hadoop HDFS daemons by
running bin/start-hdfs.sh over in the HADOOP_HOME directory. You can ensure
it started properly by testing the *put* and *get* of files into the Hadoop
filesystem. HBase does not normally use the mapreduce daemons. These do not
need to be started."
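
That put/get smoke test can be run from the shell roughly like this (the file names and paths are just illustrative, and this of course assumes a running HDFS):

```shell
# Create a small local file to round-trip through HDFS
echo "hello hdfs" > /tmp/hdfs-test.txt

# Put it into HDFS, then pull it back out under a new name
hadoop fs -put /tmp/hdfs-test.txt /tmp/hdfs-test.txt
hadoop fs -get /tmp/hdfs-test.txt /tmp/hdfs-test-copy.txt

# If the copies match, the HDFS daemons are serving reads and writes
diff /tmp/hdfs-test.txt /tmp/hdfs-test-copy.txt && echo "round trip OK"
```

If the put or get hangs or errors out, check that the NameNode and DataNodes actually came up before blaming HBase.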

On Mon, Jul 11, 2011 at 1:40 PM, Bharath Mundlapudi
<bharathwork@yahoo.com> wrote:

> Another option to look at is Pig or Hive. These need MapReduce.
>
>
> -Bharath
>
>
>
> ________________________________
> From: Rita <rmorgan466@gmail.com>
> To: "<common-user@hadoop.apache.org>" <common-user@hadoop.apache.org>
> Sent: Monday, July 11, 2011 4:31 AM
> Subject: large data and hbase
>
> I have a dataset which is several terabytes in size. I would like to query
> this data using HBase (SQL). Would I need to set up MapReduce to use HBase?
> Currently the data is stored in HDFS and I am using `hdfs -cat` to get the
> data and pipe it into stdin.
>
>
> --
> --- Get your facts first, then you can distort them as you please.--
>



-- 
--- Get your facts first, then you can distort them as you please.--
