hbase-user mailing list archives

From yonghu <yongyong...@gmail.com>
Subject Re: What's the Common Way to Execute an HBase Job?
Date Wed, 12 Feb 2014 06:31:17 GMT

To process data in HBase, you have several options:

1. A Java program using the HBase client API;
2. A MapReduce program;
3. A high-level language such as Hive or Pig (both built on top of MapReduce);
4. Phoenix, also a high-level (SQL) language, built on coprocessors.

Which one you should use depends on your requirements.
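For reference, the four options are typically launched from the command line roughly as below. This is only a sketch; the jar names, class names, table name, script name, and host are placeholders, not taken from this thread, and all commands assume a working HBase/Hadoop installation:

```shell
# 1. Plain Java client program; 'hbase classpath' prints the
#    HBase/Hadoop dependency classpath so you don't have to bundle it:
java -cp my-app.jar:$(hbase classpath) com.example.MyHBaseApp

# 2. MapReduce job reading/writing HBase tables, submitted with
#    'hadoop jar'; exporting the HBase classpath makes the HBase
#    jars visible to the job client:
HADOOP_CLASSPATH=$(hbase classpath) hadoop jar my-job.jar com.example.MyJob

# 3. Hive or Pig, compiled down to MapReduce jobs:
hive -e 'SELECT COUNT(*) FROM my_hbase_backed_table'
pig my_script.pig

# 4. Phoenix, via its sqlline client (pass the ZooKeeper quorum):
sqlline.py zookeeper-host
```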


On Wed, Feb 12, 2014 at 7:18 AM, Ji ZHANG <zhangji87@gmail.com> wrote:

> Hi,
>
> I'm using the HBase client API to connect to a remote cluster and
> perform some operations. This project will certainly require the
> hbase and hadoop-core jars. My question is whether I should use the
> 'java' command and handle all the dependencies myself (using the
> Maven shade plugin, or by setting the classpath environment
> variable), or whether there is a magic utility command that handles
> all of this for me?
>
> Take a map-reduce job, for instance. Typically the main class will
> extend Configured and implement Tool. The job is executed by the
> 'hadoop jar' command, so the environment and the hadoop-core
> dependency are already at hand. This approach also handles the
> common command-line parsing for me, and I can easily get a
> Configuration instance via 'this.getConf()'.
>
> I'm wondering whether HBase provides the same utility command?
>
> Thanks,
> Jerry
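The 'hadoop jar'-style workflow described in the question can be approximated for plain HBase client programs, because the `hbase` launcher script accepts an arbitrary main class name and runs it with the full HBase/Hadoop classpath already set up. A minimal sketch (the jar path and class name are hypothetical):

```shell
# Build the application jar, leaving hbase/hadoop dependencies as
# 'provided' since the cluster supplies them at runtime:
mvn package

# HBASE_CLASSPATH prepends your jar to the launcher's classpath;
# the hbase script then runs the given class, analogous to
# 'hadoop jar' for MapReduce jobs:
HBASE_CLASSPATH=target/my-app.jar hbase com.example.MyHBaseApp arg1 arg2
```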
