From: John Sichi <jsichi@fb.com>
To: user@hive.apache.org
Subject: Re: Hive/Hbase Integration Error
Date: Thu, 6 Jan 2011 16:34:38 +0000
Message-ID: <9CC5CBBD-97B1-4302-B443-0E23C943440F@fb.com>
In-Reply-To: <4D259AAA.5020606@orkash.com>
Here is what you need to do:

1) Use svn to check out the source for Hive 0.6
2) In your checkout, replace the HBase 0.20.3 jars with the ones from 0.20.6
3) Build Hive 0.6 from source
4) Use your new Hive build

JVS

On Jan 6, 2011, at 2:34 AM, Adarsh Sharma wrote:

> Dear all,
>
> I am sorry for posting this message again, but I have not been able to locate the root cause despite a lot of googling.
>
> I have been trying the Hive/HBase integration for the past 2 days, and I am facing the issue below while creating an external table in Hive.
>
> I am using hadoop-0.20.2, hbase-0.20.6, hive-0.6.0 (MySQL as the metastore) and java-1.6.0_20. HBase 0.20.3 was also checked.
>
> The problem arises when I issue the command below:
>
> hive> CREATE TABLE hive_hbasetable_k(key int, value string)
>     > STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
>     > WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
>     > TBLPROPERTIES ("hbase.table.name" = "hivehbasek");
>
> FAILED: Error in metadata: MetaException(message:org.apache.hadoop.hbase.MasterNotRunningException
>         at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.getMaster(HConnectionManager.java:374)
>         at org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:72)
>         at org.apache.hadoop.hive.hbase.HBaseStorageHandler.getHBaseAdmin(HBaseStorageHandler.java:64)
>         at org.apache.hadoop.hive.hbase.HBaseStorageHandler.preCreateTable(HBaseStorageHandler.java:159)
>         at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:275)
>         at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:394)
>         at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:2126)
>         at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:166)
>         at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:107)
>         at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:55)
>         at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:633)
>         at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:506)
>         at org.apache.hadoop.hive.ql.Driver.run(Driver.java:384)
>         at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:138)
>         at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:197)
>         at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:302)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
>
> It seems my HMaster is not running, but I checked from IP:60010 that it is running, and I am able to create and insert into tables in HBase properly.
>
> Below are the contents of my hive.log:
>
> 2011-01-05 15:19:36,783 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.resources" but it cannot be resolved.
> 2011-01-05 15:19:36,783 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.resources" but it cannot be resolved.
> 2011-01-05 15:19:36,785 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.runtime" but it cannot be resolved.
> 2011-01-05 15:19:36,785 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.runtime" but it cannot be resolved.
> 2011-01-05 15:19:36,786 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.text" but it cannot be resolved.
> 2011-01-05 15:19:36,786 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.text" but it cannot be resolved.
> 2011-01-05 15:20:12,185 WARN  zookeeper.ClientCnxn (ClientCnxn.java:run(967)) - Exception closing session 0x0 to sun.nio.ch.SelectionKeyImpl@561279c8
> java.net.ConnectException: Connection refused
>         at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>         at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
>         at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:933)
> 2011-01-05 15:20:12,188 WARN  zookeeper.ClientCnxn (ClientCnxn.java:cleanup(1001)) - Ignoring exception during shutdown input
> java.nio.channels.ClosedChannelException
>         at sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:638)
>         at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:360)
>         at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:999)
>         at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
> 2011-01-05 15:20:12,188 WARN  zookeeper.ClientCnxn (ClientCnxn.java:cleanup(1006)) - Ignoring exception during shutdown output
> java.nio.channels.ClosedChannelException
>         at sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:649)
>         at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
>         at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1004)
>         at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
> 2011-01-05 15:20:12,621 WARN  zookeeper.ClientCnxn (ClientCnxn.java:run(967)) - Exception closing session 0x0 to sun.nio.ch.SelectionKeyImpl@799dbc3b
>
> Please help me, as I am not able to solve this problem.
>
> I also want to add that my Hadoop cluster has 9 nodes, and 8 nodes act as DataNodes, TaskTrackers and RegionServers.
>
> Among these nodes I set zookeeper.quorum.property to have 5 DataNodes. I don't know the number of servers needed for ZooKeeper in fully distributed mode.
>
> Best Regards
>
> Adarsh Sharma
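[Archive note] The four steps in the reply above could be sketched roughly as follows. The branch URL, the in-tree location of the HBase jars, and the Ant target are assumptions based on the Hive 0.6 source layout of the time, not confirmed by this thread; adjust paths to match your checkout.

```shell
# 1) Check out the Hive 0.6 branch (URL is an assumption)
svn checkout http://svn.apache.org/repos/asf/hive/branches/branch-0.6 hive-0.6
cd hive-0.6

# 2) Swap the bundled HBase 0.20.3 jars for the 0.20.6 ones from your cluster
#    (hbase-handler/lib is an assumed location -- find the 0.20.3 jars in your tree first)
rm hbase-handler/lib/hbase-0.20.3*.jar
cp /path/to/hbase-0.20.6/hbase-0.20.6.jar hbase-handler/lib/

# 3) Build from source (Hive 0.6 used Ant; output lands under build/dist)
ant package

# 4) Use the freshly built CLI instead of the stock 0.6.0 install
build/dist/bin/hive
```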
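[Archive note] On the ZooKeeper question quoted above: an ensemble is conventionally an odd number of servers (3 or 5) so that a majority quorum survives failures, and the repeated "Connection refused" from zookeeper.ClientCnxn in the log suggests the Hive client cannot reach any quorum member at all. A minimal sketch of the client-side setting, with placeholder hostnames, in an hbase-site.xml visible on the Hive client's classpath:

```xml
<!-- Placeholder hostnames; list the actual ZooKeeper quorum members -->
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>node1.example.com,node2.example.com,node3.example.com</value>
</property>
```

The Hive HBaseIntegration wiki of this era also shows passing the value on the command line, e.g. `hive -hiveconf hbase.zookeeper.quorum=node1,node2,node3`, which avoids editing config files while testing.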