From: Adarsh Sharma
Date: Fri, 07 Jan 2011 11:23:45 +0530
To: user@hive.apache.org
Subject: Re: Hive/Hbase Integration Error

John Sichi wrote:
> Here is what you need to do:
>
> 1) Use svn to check out the source for Hive 0.6

I downloaded the Hive-0.6.0 source code with the command

 svn co http://svn.apache.org/repos/asf/hive/branches/branch-0.6/ hive-0.6.0


> 2) In your checkout, replace the HBase 0.20.3 jars with the ones from 0.20.6

I replaced hbase-0.20.3.jar and hbase-0.20.3.test.jar with the hbase-0.20.6.jar and hbase-0.20.6.test jars in the Hive-0.6.0/hbase-handler/lib folder.

> 3) Build Hive 0.6 from source

Then I built the Hive package with the command ant -Dhadoop.version=0.20.0 package.
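
Putting steps 1-3 together, the full sequence I ran was roughly the following (only a sketch; $HBASE_HOME below is just a placeholder for the directory holding the 0.20.6 jars, and the exact test-jar file names may differ):

  # 1) check out the Hive 0.6 branch (same command as above)
  svn co http://svn.apache.org/repos/asf/hive/branches/branch-0.6/ hive-0.6.0
  cd hive-0.6.0

  # 2) swap the bundled HBase 0.20.3 jars for the 0.20.6 ones
  rm hbase-handler/lib/hbase-0.20.3*.jar
  cp $HBASE_HOME/hbase-0.20.6.jar $HBASE_HOME/hbase-0.20.6-test.jar hbase-handler/lib/

  # 3) build the package against Hadoop 0.20
  ant -Dhadoop.version=0.20.0 package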
Am I doing something wrong?

I also want to know why this error occurs in hive.log:

2011-01-05 15:19:36,783 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.resources" but it cannot be resolved.


With Best Regards

Adarsh Sharma

> 4) Use your new Hive build

And I used this new Hive build, but I am sorry to say the error remains the same.

> JVS

> On Jan 6, 2011, at 2:34 AM, Adarsh Sharma wrote:

Dear all,

I am sorry to post this message again, but I have not been able to locate the root cause even after a lot of googling.

I have been trying the Hive/HBase integration for the past 2 days, and I am facing the issue below while creating an external table in Hive.

I am using hadoop-0.20.2, hbase-0.20.6, hive-0.6.0 (MySQL as the metastore) and java-1.6.0_20. HBase-0.20.3 was also tried.

The problem arises when I issue the command below:

hive> CREATE TABLE hive_hbasetable_k(key int, value string)
    > STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
    > WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
    > TBLPROPERTIES ("hbase.table.name" = "hivehbasek");


FAILED: Error in metadata: MetaException(message:org.apache.hadoop.hbase.MasterNotRunningException
        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.getMaster(HConnectionManager.java:374)
        at org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:72)
        at org.apache.hadoop.hive.hbase.HBaseStorageHandler.getHBaseAdmin(HBaseStorageHandler.java:64)
        at org.apache.hadoop.hive.hbase.HBaseStorageHandler.preCreateTable(HBaseStorageHandler.java:159)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:275)
        at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:394)
        at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:2126)
        at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:166)
        at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:107)
        at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:55)
        at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:633)
        at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:506)
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:384)
        at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:138)
        at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:197)
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:302)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask


It seems my HMaster is not running, but I checked at IP:60010 that it is running, and I am able to create and insert into tables in HBase properly.
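
As far as I understand, the Hive CLI also has to be told where the HBase ZooKeeper quorum is, roughly like this (only a sketch; the jar paths and the zk1,zk2,zk3 host names are placeholders, not my actual setup):

  # start the Hive CLI with the handler/HBase/ZooKeeper jars on the aux path
  # and point it at the ZooKeeper quorum used by HBase
  hive --auxpath /path/to/hive_hbase-handler.jar,/path/to/hbase-0.20.6.jar,/path/to/zookeeper.jar \
       -hiveconf hbase.zookeeper.quorum=zk1,zk2,zk3

I believe that without hbase.zookeeper.quorum set, the client falls back to localhost, which would also explain the "Connection refused" ZooKeeper warnings further down.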

Below are the contents of my hive.log:

  2011-01-05 15:19:36,783 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.resources" but it cannot be resolved.
 2011-01-05 15:19:36,783 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.resources" but it cannot be resolved.
 2011-01-05 15:19:36,785 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.runtime" but it cannot be resolved.
 2011-01-05 15:19:36,785 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.runtime" but it cannot be resolved.
 2011-01-05 15:19:36,786 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.text" but it cannot be resolved.
 2011-01-05 15:19:36,786 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.text" but it cannot be resolved.
 2011-01-05 15:20:12,185 WARN  zookeeper.ClientCnxn (ClientCnxn.java:run(967)) - Exception closing session 0x0 to sun.nio.ch.SelectionKeyImpl@561279c8
 java.net.ConnectException: Connection refused
       at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
       at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
       at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:933)
 2011-01-05 15:20:12,188 WARN  zookeeper.ClientCnxn (ClientCnxn.java:cleanup(1001)) - Ignoring exception during shutdown input
 java.nio.channels.ClosedChannelException
       at sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:638)
       at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:360)
       at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:999)
       at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
 2011-01-05 15:20:12,188 WARN  zookeeper.ClientCnxn (ClientCnxn.java:cleanup(1006)) - Ignoring exception during shutdown output
 java.nio.channels.ClosedChannelException
       at sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:649)
       at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
       at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1004)
       at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
 2011-01-05 15:20:12,621 WARN  zookeeper.ClientCnxn (ClientCnxn.java:run(967)) - Exception closing session 0x0 to sun.nio.ch.SelectionKeyImpl@799dbc3b
 
Please help me, as I am not able to solve this problem.
 
Also, I want to add one more thing: my Hadoop cluster has 9 nodes, and 8 of them act as Datanodes, Tasktrackers and Regionservers.
 
Among these nodes, the ZooKeeper quorum property (hbase.zookeeper.quorum) is set to 5 of the Datanodes. I don't know how many servers are needed for ZooKeeper in fully distributed mode.
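
If it helps, each quorum member can at least be checked for reachability with ZooKeeper's four-letter commands; a quick sketch (datanode1 and the default client port 2181 are placeholders for my actual quorum):

  # ask a quorum member whether it is alive and what role it is playing
  # (repeat for every host listed in the quorum)
  echo ruok | nc datanode1 2181    # a healthy server answers "imok"
  echo stat | nc datanode1 2181    # "Mode:" shows leader / follower / standalone

As far as I know, ZooKeeper ensembles normally use an odd number of servers (3 or 5), so 5 should be fine as long as each of them is actually reachable on port 2181.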
 
 
 Best Regards

 Adarsh Sharma



    

  
