Subject: Re: Errors in hive queries
From: vipul sharma <sharmavipulwork@gmail.com>
To: user@hive.apache.org
Date: Wed, 27 Apr 2011 16:54:38 -0700 (PDT)

This got resolved. Some of the clients had not been upgraded to CDH3 and were still running an old version.

On Wed, Apr 27, 2011 at 2:27 PM, vipul sharma wrote:

> We have a Hadoop/Hive cluster running Cloudera's distribution. The
> metastore is stored in MySQL, and all the relevant drivers are on the
> classpath and in the conf files.
> While running queries on Hive I am getting these errors:
>
> $ hive -hiveconf hive.root.logger=INFO,console
> Hive history file=/tmp/vipul/hive_job_log_vipul_201104271409_1310845475.txt
> 11/04/27 14:09:01 INFO exec.HiveHistory: Hive history file=/tmp/vipul/hive_job_log_vipul_201104271409_1310845475.txt
> hive> select time from requests_stat_min;
> 11/04/27 14:15:38 INFO parse.ParseDriver: Parsing command: select time from requests_stat_min
> 11/04/27 14:15:38 INFO parse.ParseDriver: Parse Completed
> 11/04/27 14:15:39 INFO parse.SemanticAnalyzer: Starting Semantic Analysis
> 11/04/27 14:15:39 INFO parse.SemanticAnalyzer: Completed phase 1 of Semantic Analysis
> 11/04/27 14:15:39 INFO parse.SemanticAnalyzer: Get metadata for source tables
> 11/04/27 14:15:39 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
> 11/04/27 14:15:39 INFO metastore.ObjectStore: ObjectStore, initialize called
> 11/04/27 14:15:39 ERROR DataNucleus.Plugin: Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.resources" but it cannot be resolved.
> 11/04/27 14:15:39 ERROR DataNucleus.Plugin: Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.runtime" but it cannot be resolved.
> 11/04/27 14:15:39 ERROR DataNucleus.Plugin: Bundle "org.eclipse.jdt.core" requires "org.eclipse.text" but it cannot be resolved.
> 11/04/27 14:15:40 INFO metastore.ObjectStore: Initialized ObjectStore
> 11/04/27 14:15:40 INFO metastore.HiveMetaStore: 0: get_table : db=default tbl=requests_stat_min
> 11/04/27 14:15:41 INFO hive.log: DDL: struct requests_stat_min { string time, i64 request_count, i64 get_count, i64 post_count, i64 unique_orders, i64 unique_events, i64 start_orders, i32 max_response_time, i32 min_response_time, i32 avg_response_time, i32 max_response_size, i32 min_response_size, i32 avg_response_size, i64 unique_referrers}
> 11/04/27 14:15:41 INFO parse.SemanticAnalyzer: Get metadata for subqueries
> 11/04/27 14:15:41 INFO parse.SemanticAnalyzer: Get metadata for destination tables
> 11/04/27 14:15:41 INFO parse.SemanticAnalyzer: Completed getting MetaData in Semantic Analysis
> 11/04/27 14:15:41 INFO ppd.OpProcFactory: Processing for FS(2)
> 11/04/27 14:15:41 INFO ppd.OpProcFactory: Processing for SEL(1)
> 11/04/27 14:15:41 INFO ppd.OpProcFactory: Processing for TS(0)
> 11/04/27 14:15:41 INFO metastore.HiveMetaStore: 0: get_partition_names : db=default tbl=requests_stat_min
> 11/04/27 14:15:41 INFO metastore.HiveMetaStore: 0: get_partition : db=default tbl=requests_stat_min
> 11/04/27 14:15:41 INFO metastore.HiveMetaStore: 0: get_partition : db=default tbl=requests_stat_min
> 11/04/27 14:15:42 INFO metastore.HiveMetaStore: 0: get_partition : db=default tbl=requests_stat_min
> 11/04/27 14:15:42 INFO metastore.HiveMetaStore: 0: get_partition : db=default tbl=requests_stat_min
> 11/04/27 14:15:42 INFO metastore.HiveMetaStore: 0: get_partition : db=default tbl=requests_stat_min
> 11/04/27 14:15:42 INFO metastore.HiveMetaStore: 0: get_partition : db=default tbl=requests_stat_min
> 11/04/27 14:15:42 INFO metastore.HiveMetaStore: 0: get_partition : db=default tbl=requests_stat_min
> 11/04/27 14:15:42 INFO metastore.HiveMetaStore: 0: get_partition : db=default tbl=requests_stat_min
> 11/04/27 14:15:42 INFO metastore.HiveMetaStore: 0: get_partition : db=default tbl=requests_stat_min
> 11/04/27 14:15:42 INFO metastore.HiveMetaStore: 0: get_partition : db=default tbl=requests_stat_min
> 11/04/27 14:15:42 INFO hive.log: DDL: struct requests_stat_min { string time, i64 request_count, i64 get_count, i64 post_count, i64 unique_orders, i64 unique_events, i64 start_orders, i32 max_response_time, i32 min_response_time, i32 avg_response_time, i32 max_response_size, i32 min_response_size, i32 avg_response_size, i64 unique_referrers}
> 11/04/27 14:15:42 INFO hive.log: DDL: struct requests_stat_min { string time, i64 request_count, i64 get_count, i64 post_count, i64 unique_orders, i64 unique_events, i64 start_orders, i32 max_response_time, i32 min_response_time, i32
> 11/04/27 14:15:42 INFO parse.SemanticAnalyzer: Completed plan generation
> 11/04/27 14:15:42 INFO ql.Driver: Semantic Analysis Completed
> 11/04/27 14:15:42 INFO ql.Driver: Starting command: select time from requests_stat_min
> Total MapReduce jobs = 1
> 11/04/27 14:15:42 INFO ql.Driver: Total MapReduce jobs = 1
> Launching Job 1 out of 1
> 11/04/27 14:15:42 INFO ql.Driver: Launching Job 1 out of 1
> Number of reduce tasks is set to 0 since there's no reduce operator
> 11/04/27 14:15:42 INFO exec.ExecDriver: Number of reduce tasks is set to 0 since there's no reduce operator
> FAILED: Unknown exception : null
> 11/04/27 14:15:42 ERROR ql.Driver: FAILED: Unknown exception : null
> java.lang.NullPointerException
>         at java.util.Hashtable.put(Hashtable.java:394)
>         at java.util.Properties.setProperty(Properties.java:143)
>         at org.apache.hadoop.conf.Configuration.set(Configuration.java:460)
>         at org.apache.hadoop.hive.conf.HiveConf.setVar(HiveConf.java:293)
>         at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:505)
>         at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:100)
>         at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:64)
>         at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:572)
>         at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:452)
>         at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:314)
>         at org.apache.hadoop.hive.ql.Driver.run(Driver.java:302)
>         at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
>         at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
>         at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:287)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.util.RunJar.main(RunJar.java:186)
>
> I am assuming this has to do with the metastore, but I can't figure out
> what's wrong. We run queries via a remote client. Any help is greatly
> appreciated!
>
> --
> Vipul Sharma
> sharmavipul AT gmail DOT com

--
Vipul Sharma
sharmavipul AT gmail DOT com
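[Editor's note, for the archives] The stack trace bottoms out in java.util.Properties.setProperty, which (via Hashtable.put) throws a NullPointerException whenever the value is null. A plausible reading, consistent with the reported fix, is that ExecDriver on the non-upgraded clients handed Configuration.set a null value for some configuration variable their version never populated. A minimal sketch of that failure mode follows; "hive.some.newer.key" is a made-up name for illustration, not a key from this thread.

```java
import java.util.Properties;

public class NullValueDemo {
    // Returns "ok" if Properties accepts the key/value pair, "npe" if it
    // throws, mirroring what Configuration.set does under HiveConf.setVar.
    static String trySet(String key, String value) {
        Properties props = new Properties();
        try {
            props.setProperty(key, value);
            return "ok";
        } catch (NullPointerException e) {
            return "npe";
        }
    }

    public static void main(String[] args) {
        // A populated value is accepted.
        System.out.println(trySet("hive.exec.scratchdir", "/tmp/hive"));
        // A null value (e.g. a variable the old client never set) fails
        // with a NullPointerException, just like the trace above.
        System.out.println(trySet("hive.some.newer.key", null));
    }
}
```

This is why the error surfaces only as "FAILED: Unknown exception : null": the NPE carries no message, and the driver reports the exception's null message text.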
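[Editor's note] The original message mentions the MySQL-backed metastore and JDBC drivers without showing the configuration. For readers setting this up, the relevant hive-site.xml section typically looks roughly like the following; the host, database name, and credentials are placeholders, not values from this cluster.

```xml
<configuration>
  <!-- JDBC connection to the MySQL metastore database -->
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://metastore-host:3306/hive_metastore</value>
  </property>
  <!-- Requires the MySQL connector JAR on the classpath -->
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>secret</value>
  </property>
</configuration>
```

Every client that talks to the metastore directly (as a remote CLI does here) needs this configuration and a Hive version compatible with the schema, which is consistent with the resolution reported at the top of the thread.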