Subject: Re: Hive/Hbase Integration Error
From: Jean-Daniel Cryans
To: user@hive.apache.org
Date: Sun, 9 Jan 2011 22:05:17 -0800

You also need to create the table in order to see the relevant debug information; it won't create it until it needs it.
J-D

On Jan 9, 2011 9:30 PM, "Adarsh Sharma" wrote:
> Jean-Daniel Cryans wrote:
>> Just figured that running the shell with this command will give all
>> the info you need:
>>
>> bin/hive -hiveconf hive.root.logger=INFO,console
>>
>
>
> Thanks JD, below is the output of this command:
>
> hadoop@s2-ratw-1:~/project/hive-0.6.0/build/dist$ bin/hive -hiveconf
> hive.root.logger=INFO,console
> Hive history
> file=/tmp/hadoop/hive_job_log_hadoop_201101101024_1339616584.txt
> 11/01/10 10:24:47 INFO exec.HiveHistory: Hive history
> file=/tmp/hadoop/hive_job_log_hadoop_201101101024_1339616584.txt
> hive> show tables;
> 11/01/10 10:25:07 INFO parse.ParseDriver: Parsing command: show tables
> 11/01/10 10:25:07 INFO parse.ParseDriver: Parse Completed
> 11/01/10 10:25:07 INFO ql.Driver: Semantic Analysis Completed
> 11/01/10 10:25:07 INFO ql.Driver: Returning Hive schema:
> Schema(fieldSchemas:[FieldSchema(name:tab_name, type:string,
> comment:from deserializer)], properties:null)
> 11/01/10 10:25:07 INFO ql.Driver: Starting command: show tables
> 11/01/10 10:25:07 INFO metastore.HiveMetaStore: 0: Opening raw store
> with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
> 11/01/10 10:25:07 INFO metastore.ObjectStore: ObjectStore, initialize called
> *11/01/10 10:25:08 ERROR DataNucleus.Plugin: Bundle
> "org.eclipse.jdt.core" requires "org.eclipse.core.resources" but it
> cannot be resolved.
> 11/01/10 10:25:08 ERROR DataNucleus.Plugin: Bundle
> "org.eclipse.jdt.core" requires "org.eclipse.core.runtime" but it cannot
> be resolved.
> 11/01/10 10:25:08 ERROR DataNucleus.Plugin: Bundle
> "org.eclipse.jdt.core" requires "org.eclipse.text" but it cannot be
> resolved.*
> 11/01/10 10:25:09 INFO metastore.ObjectStore: Initialized ObjectStore
> 11/01/10 10:25:10 INFO metastore.HiveMetaStore: 0: get_tables:
> db=default pat=.*
> OK
> 11/01/10 10:25:15 INFO ql.Driver: OK
> Time taken: 7.897 seconds
> 11/01/10 10:25:15 INFO CliDriver: Time taken: 7.897 seconds
> hive> exit;
>
> It seems that Hive is working but I am facing issues while integrating
> with Hbase.
>
>
> Best Regards
>
> Adarsh Sharma
>
>
>> J-D
>>
>> On Fri, Jan 7, 2011 at 9:57 AM, Jean-Daniel Cryans wrote:
>>
>>> While testing other things yesterday on my local machine, I
>>> encountered the same stack traces. As I said the other day (which
>>> you seem to have discarded while debugging your issue), it's
>>> not able to connect to ZooKeeper.
>>>
>>> Following the cue, I added these lines in HBaseStorageHandler.setConf():
>>>
>>> System.out.println(hbaseConf.get("hbase.zookeeper.quorum"));
>>> System.out.println(hbaseConf.get("hbase.zookeeper.property.clientPort"));
>>>
>>> It showed me this when trying to create a table (after recompiling):
>>>
>>> localhost
>>> 21810
>>>
>>> I was testing with 0.89, and the test jar includes an hbase-site.xml
>>> which has the port 21810 instead of the default 2181. I remembered
>>> that it's a known issue that has since been fixed for 0.90.0, so
>>> removing that jar fixed it for me.
>>>
>>> I'm not saying that in your case it's the same fix, but at least by
>>> debugging those configurations you'll know where it's trying to
>>> connect, and then you'll be able to get to the bottom of your issue.
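[Archive note: J-D's fix, removing a jar whose bundled hbase-site.xml overrode hbase.zookeeper.property.clientPort, generalizes to scanning the client classpath for jars that ship their own hbase-site.xml. A rough sketch of that check, not something prescribed in the thread; the lib directory path below is a placeholder:

```python
import glob
import os
import zipfile


def jars_with_hbase_site(lib_dir):
    """Return jars under lib_dir that bundle their own hbase-site.xml.

    A bundled hbase-site.xml earlier on the classpath can silently
    override settings such as hbase.zookeeper.property.clientPort,
    which is what bit J-D with the 0.89 test jar (21810 vs. 2181).
    """
    hits = []
    for jar in sorted(glob.glob(os.path.join(lib_dir, "*.jar"))):
        try:
            with zipfile.ZipFile(jar) as zf:
                if any(name.endswith("hbase-site.xml") for name in zf.namelist()):
                    hits.append(jar)
        except zipfile.BadZipFile:
            pass  # skip corrupt or non-jar files
    return hits


if __name__ == "__main__":
    # Placeholder path; point this at your Hive installation's lib directory.
    for jar in jars_with_hbase_site(os.path.expanduser("~/project/hive-0.6.0/build/dist/lib")):
        print(jar)
```

Any jar it prints is a candidate for overriding the ZooKeeper settings you think you configured.]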
>>>
>>> J-D
>>>
>>> On Fri, Jan 7, 2011 at 4:54 AM, Adarsh Sharma wrote:
>>>
>>>> John Sichi wrote:
>>>>
>>>> On Jan 6, 2011, at 9:53 PM, Adarsh Sharma wrote:
>>>>
>>>>
>>>> I want to know why it occurs in hive.log
>>>>
>>>> 2011-01-05 15:19:36,783 ERROR DataNucleus.Plugin
>>>> (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires
>>>> "org.eclipse.core.resources" but it cannot be resolved.
>>>>
>>>>
>>>>
>>>> That is a bogus error; it always shows up, so you can ignore it.
>>>>
>>>>
>>>>
>>>> And I used this new Hive build, but I am sorry, the error remains the same.
>>>>
>>>>
>>>> Then I don't know... probably still some remaining configuration error. This
>>>> guy seems to have gotten it working:
>>>>
>>>> http://mevivs.wordpress.com/2010/11/24/hivehbase-integration/
>>>>
>>>>
>>>> Thanks a lot John, I know this link, as I started working by following
>>>> it in the past.
>>>>
>>>> But I think I have to research the exception or warning below to solve this
>>>> issue.
>>>>
>>>> 2011-01-05 15:20:12,185 WARN zookeeper.ClientCnxn
>>>> (ClientCnxn.java:run(967)) - Exception closing session 0x0 to
>>>> sun.nio.ch.SelectionKeyImpl@561279c8
>>>> java.net.ConnectException: Connection refused
>>>> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>>>> at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
>>>> at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:933)
>>>> 2011-01-05 15:20:12,188 WARN zookeeper.ClientCnxn
>>>> (ClientCnxn.java:cleanup(1001)) - Ignoring exception during shutdown input
>>>> java.nio.channels.ClosedChannelException
>>>> at sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:638)
>>>> at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:360)
>>>> at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:999)
>>>> at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
>>>> 2011-01-05 15:20:12,188 WARN zookeeper.ClientCnxn
>>>> (ClientCnxn.java:cleanup(1006)) - Ignoring exception during shutdown output
>>>> java.nio.channels.ClosedChannelException
>>>> at sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:649)
>>>> at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
>>>> at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1004)
>>>> at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
>>>> 2011-01-05 15:20:12,621 WARN zookeeper.ClientCnxn
>>>> (ClientCnxn.java:run(967)) - Exception closing session 0x0 to
>>>> sun.nio.ch.SelectionKeyImpl@799dbc3b
>>>>
>>>> Please help me, as I am not able to solve this problem.
>>>>
>>>> I also want to add that my Hadoop cluster has 9 nodes, and
>>>> 8 nodes act as Datanodes, Tasktrackers and Regionservers.
>>>>
>>>>
>>>> Best Regards
>>>>
>>>> Adarsh Sharma
>>>>
>>>> JVS
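[Archive note: the java.net.ConnectException ("Connection refused") quoted in this thread means nothing was listening at the host:port the ZooKeeper client resolved from hbase.zookeeper.quorum and hbase.zookeeper.property.clientPort. A minimal probe of those values, as a sketch rather than an official tool; the host and port below are placeholders, not values from the thread:

```python
import socket


def probe(host, port, timeout=3.0):
    """Try a plain TCP connect to one ZooKeeper quorum member.

    Returns None on success, or the error string for the same failure
    that surfaces on the Java side as java.net.ConnectException
    ('Connection refused') in ClientCnxn.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return None
    except OSError as e:
        return str(e)


if __name__ == "__main__":
    # Placeholder quorum; substitute the values J-D's println trick
    # reports for hbase.zookeeper.quorum and
    # hbase.zookeeper.property.clientPort (default port 2181).
    for host in ["localhost"]:
        err = probe(host, 2181)
        print(host, "OK" if err is None else err)
```

If the probe fails for every quorum member, the client-side config (or a stray bundled hbase-site.xml) is pointing at the wrong place, or ZooKeeper simply isn't running there.]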
