hbase-user mailing list archives

From Omkar Joshi <Omkar.Jo...@lntinfotech.com>
Subject Unable to connect from windows desktop to HBase
Date Mon, 15 Apr 2013 03:42:03 GMT

I'm trying to connect from a Java client running on a Windows desktop machine to a remote
HBase cluster running in distributed mode (atop a 2-node Hadoop cluster).

1.       On the master (namenode) node, jps shows:
9161 HMaster
4536 SecondaryNameNode
4368 DataNode
4645 JobTracker
8395 Jps
4813 TaskTracker
4179 NameNode
7717 Main

2.       On the slave (datanode) node:

hduser@cldx-1140-1034:~$ jps
5677 HRegionServer
5559 HQuorumPeer
2634 TaskTracker
3260 Jps
2496 DataNode

3.       I also connected to the HBase shell, created a table, and was able to scan it:

hbase(main):004:0> scan 'CUSTOMERS'
ROW                              COLUMN+CELL
 CUSTID12345                     column=CUSTOMER_INFO:EMAIL, timestamp=1365600369284, value=Omkar.Joshi@lntinfotech.com
 CUSTID12345                     column=CUSTOMER_INFO:NAME, timestamp=1365600052104, value=Omkar
 CUSTID614                       column=CUSTOMER_INFO:NAME, timestamp=1365601350972, value=Prachi
2 row(s) in 0.8760 seconds

4.       The hbase-site.xml has the following configurations (the property names and values did not survive in the archived copy; only the descriptions remain):

    <description>The directory shared by RegionServers.</description>

    <description>Property from ZooKeeper's config zoo.cfg.
    The directory where the snapshot is stored.</description>

    <description>The directory shared by RegionServers.</description>

    <description>The mode the cluster will be in. Possible values are false: standalone
    and pseudo-distributed setups with managed Zookeeper true: fully-distributed with unmanaged
    Zookeeper Quorum (see hbase-env.sh)</description>
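For reference, a fully-distributed hbase-site.xml typically pairs those descriptions with properties like the following sketch; the hostnames and paths here are placeholders, not values from the original post, and must match the nodes actually running the namenode and HQuorumPeer:

```xml
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <!-- placeholder: must point at the HDFS namenode -->
    <value>hdfs://master-host:9000/hbase</value>
    <description>The directory shared by RegionServers.</description>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <!-- placeholder path -->
    <value>/home/hduser/zookeeper</value>
    <description>Property from ZooKeeper's config zoo.cfg.
    The directory where the snapshot is stored.</description>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
    <description>The mode the cluster will be in.</description>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <!-- placeholder: the host(s) running HQuorumPeer -->
    <value>slave-host</value>
  </property>
</configuration>
```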



5.       Hadoop's core-site.xml has the following settings (these were also stripped in the archived copy):

6.       My Java client class is (several lines were dropped from the archived copy; the restored statements are marked with comments):

package client.hbase;

import java.io.IOException;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.HTablePool;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.CompareFilter;
import org.apache.hadoop.hbase.filter.Filter;
import org.apache.hadoop.hbase.filter.RegexStringComparator;
import org.apache.hadoop.hbase.filter.ValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseCRUD {

    private static Configuration config;
    private static HBaseAdmin hbaseAdmin;
    private static HTablePool hTablePool;

    static {
        config = HBaseConfiguration.create();
        // NB: the quorum value is empty here, exactly as posted
        config.set("hbase.zookeeper.quorum", "");
        hTablePool = new HTablePool(config, 2);
    }

    /**
     * @param args
     * @throws IOException
     */
    public static void main(String[] args) throws IOException {
        HBaseCRUD hbaseCRUD = new HBaseCRUD();
        /*hbaseCRUD.createTables("CUSTOMERS", "CUSTOMER_INFO");*/
        hbaseCRUD.scanTable("CUSTOMERS", "CUSTOMER_INFO", "EMAIL");
    }

    private void createTables(String tableName, String... columnFamilyNames)
            throws IOException {
        HTableDescriptor tableDesc = new HTableDescriptor(tableName);
        if (!(columnFamilyNames == null || columnFamilyNames.length == 0)) {
            for (String columnFamilyName : columnFamilyNames) {
                HColumnDescriptor columnFamily = new HColumnDescriptor(
                        columnFamilyName);
                // restored: add each family to the table descriptor
                tableDesc.addFamily(columnFamily);
            }
        }
        // restored: create the table via HBaseAdmin
        hbaseAdmin = new HBaseAdmin(config);
        hbaseAdmin.createTable(tableDesc);
    }

    private void populateTableData(String tableName) throws IOException {
        HTableInterface tbl = hTablePool.getTable(Bytes.toBytes(tableName));
        List<Put> tblRows = getTableData(tableName);
        // restored: presumably the rows were written here
        tbl.put(tblRows);
    }

    private List<Put> getTableData(String tableName) {
        if (tableName == null || tableName.isEmpty())
            return null;
        /* Pull data from wherever located */
        if (tableName.equalsIgnoreCase("CUSTOMERS")) {
            /*
             * Put p1 = new Put(); p1.add(Bytes.toBytes("CUSTOMER_INFO"),
             * Bytes.toBytes("NAME"), value)
             */
        }
        return null;
    }

    private void scanTable(String tableName, String columnFamilyName,
            String... columnNames) throws IOException {
        System.out.println("In HBaseCRUD.scanTable(...)");
        Scan scan = new Scan();
        if (!(columnNames == null || columnNames.length <= 0)) {
            for (String columnName : columnNames) {
                // restored: presumably each requested column was added to the scan
                scan.addColumn(Bytes.toBytes(columnFamilyName),
                        Bytes.toBytes(columnName));
            }
            Filter filter = new ValueFilter(CompareFilter.CompareOp.EQUAL,
                    new RegexStringComparator("lntinfotech"));
            // restored: attach the filter to the scan
            scan.setFilter(filter);
        }
        HTableInterface tbl = hTablePool.getTable(Bytes.toBytes(tableName));
        ResultScanner scanResults = tbl.getScanner(scan);
        for (Result result : scanResults) {
            System.out.println("The result is " + result);
        }
    }
}

7.       The exception I get is:

In HBaseCRUD.scanTable(...)
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Apr 10, 2013 4:24:54 PM org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper <init>
INFO: The identifier of this process is 3648@INFVA03351
Apr 10, 2013 4:24:56 PM org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper retryOrThrow
WARNING: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/hbaseid
Apr 10, 2013 4:24:56 PM org.apache.hadoop.hbase.util.RetryCounter sleepUntilNextRetry
INFO: Sleeping 2000ms before retry #1...
Apr 10, 2013 4:24:58 PM org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper retryOrThrow
WARNING: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/hbaseid
Apr 10, 2013 4:24:58 PM org.apache.hadoop.hbase.util.RetryCounter sleepUntilNextRetry
INFO: Sleeping 4000ms before retry #2...
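The 2000ms/4000ms lines come from HBase's client-side retry logic (RetryCounter), whose sleep time doubles each attempt, as seen in the log. A minimal sketch of that schedule; the class and method names here are hypothetical, not HBase's own:

```java
public class Backoff {
    // Sleep before retry n doubles each attempt:
    // retry #1 -> base (2000 ms), retry #2 -> 2*base (4000 ms), ...
    public static long sleepBeforeRetry(int retry, long baseMillis) {
        return baseMillis << (retry - 1);
    }

    public static void main(String[] args) {
        for (int retry = 1; retry <= 3; retry++) {
            System.out.println("Sleeping " + sleepBeforeRetry(retry, 2000)
                    + "ms before retry #" + retry);
        }
    }
}
```

The client keeps retrying like this until its retry budget is exhausted, so the ConnectionLoss itself (not the retries) is the symptom to chase.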

I have also added the master's entry to my local machine's hosts file. What could be causing this error?
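Since the failure is a ZooKeeper ConnectionLoss, one thing worth verifying from the Windows client is that the quorum hostname resolves there at all; the class name below is hypothetical, and the default hostname is taken from the slave node's jps output above:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class QuorumCheck {
    /** Returns the IP the client resolves for a quorum host, or null if resolution fails. */
    public static String resolve(String host) {
        try {
            return InetAddress.getByName(host).getHostAddress();
        } catch (UnknownHostException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        // e.g. the node running HQuorumPeer (an assumption based on the jps listing)
        String host = args.length > 0 ? args[0] : "cldx-1140-1034";
        String ip = resolve(host);
        System.out.println(host + " -> " + (ip == null ? "UNRESOLVED" : ip));
    }
}
```

If the name does not resolve (or resolves to the wrong interface), the quorum host also needs an entry in the client's hosts file, alongside the master's.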

Thanks and regards !

