spark-issues mailing list archives

From "Jack Hu (JIRA)" <j...@apache.org>
Subject [jira] [Resolved] (SPARK-6180) Error logged into log4j when using HiveMetastoreCatalog::tableExists
Date Mon, 23 Mar 2015 03:17:10 GMT

     [ https://issues.apache.org/jira/browse/SPARK-6180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jack Hu resolved SPARK-6180.
----------------------------
       Resolution: Fixed
    Fix Version/s: 1.3.1

Fixed by this pull request: https://github.com/apache/spark/pull/4365

> Error logged into log4j when using HiveMetastoreCatalog::tableExists
> --------------------------------------------------------------------
>
>                 Key: SPARK-6180
>                 URL: https://issues.apache.org/jira/browse/SPARK-6180
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 1.2.1
>            Reporter: Jack Hu
>            Priority: Minor
>              Labels: Hive, HiveMetastoreCatalog, spark, starter
>             Fix For: 1.3.1
>
>
> When {{HiveMetastoreCatalog.tableExists}} is used to check for a table that does not exist
> in the Hive metastore, an error message like the one below is logged, even though the function
> returns {{false}} as desired.
> To avoid this error log, one option is to call {{Hive.getTable(databaseName, tblName, false)}}
> instead of {{Hive.getTable(databaseName, tblName)}} (see the sketch after the stack trace below).
> {quote}
> 15/02/13 17:24:34 [Sql Query events] ERROR hive.ql.metadata.Hive: NoSuchObjectException(message:default.demotable table not found)
> 	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table(HiveMetaStore.java:1560)
> 	at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:606)
> 	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:105)
> 	at com.sun.proxy.$Proxy15.get_table(Unknown Source)
> 	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:997)
> 	at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:606)
> 	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:89)
> 	at com.sun.proxy.$Proxy16.getTable(Unknown Source)
> 	at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:976)
> 	at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:950)
> 	at org.apache.spark.sql.hive.HiveMetastoreCatalog.lookupRelation(HiveMetastoreCatalog.scala:70)
> 	at org.apache.spark.sql.hive.HiveContext$$anon$1.org$apache$spark$sql$catalyst$analysis$OverrideCatalog$$super$lookupRelation(HiveContext.scala:253)
> 	at org.apache.spark.sql.catalyst.analysis.OverrideCatalog$$anonfun$lookupRelation$3.apply(Catalog.scala:141)
> 	at org.apache.spark.sql.catalyst.analysis.OverrideCatalog$$anonfun$lookupRelation$3.apply(Catalog.scala:141)
> 	at scala.Option.getOrElse(Option.scala:120)
> 	at org.apache.spark.sql.catalyst.analysis.OverrideCatalog$class.lookupRelation(Catalog.scala:141)
> 	at org.apache.spark.sql.hive.HiveContext$$anon$1.lookupRelation(HiveContext.scala:253)
> 	at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$5.applyOrElse(Analyzer.scala:143)
> 	at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$5.applyOrElse(Analyzer.scala:138)
> 	at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:144)
> 	at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:162)
> 	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
> 	at scala.collection.Iterator$class.foreach(Iterator.scala:727)
> 	at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
> 	at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
> 	at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
> 	at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
> 	at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
> 	at scala.collection.AbstractIterator.to(Iterator.scala:1157)
> 	at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
> 	at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
> 	at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
> 	at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
> 	at org.apache.spark.sql.catalyst.trees.TreeNode.transformChildrenDown(TreeNode.scala:191)
> 	at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:147)
> 	at org.apache.spark.sql.catalyst.trees.TreeNode.transform(TreeNode.scala:135)
> 	at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:138)
> 	at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:137)
> 	at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply$1$$anonfun$apply$2.apply(RuleExecutor.scala:61)
> 	at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply$1$$anonfun$apply$2.apply(RuleExecutor.scala:59)
> 	at scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:111)
> 	at scala.collection.immutable.List.foldLeft(List.scala:84)
> 	at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply$1.apply(RuleExecutor.scala:59)
> 	at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply$1.apply(RuleExecutor.scala:51)
> 	at scala.collection.immutable.List.foreach(List.scala:318)
> 	at org.apache.spark.sql.catalyst.rules.RuleExecutor.apply(RuleExecutor.scala:51)
> 	at org.apache.spark.sql.SQLContext$QueryExecution.analyzed$lzycompute(SQLContext.scala:411)
> 	at org.apache.spark.sql.SQLContext$QueryExecution.analyzed(SQLContext.scala:411)
> 	at org.apache.spark.sql.SQLContext$QueryExecution.withCachedData$lzycompute(SQLContext.scala:412)
> 	at org.apache.spark.sql.SQLContext$QueryExecution.withCachedData(SQLContext.scala:412)
> 	at org.apache.spark.sql.SQLContext$QueryExecution.optimizedPlan$lzycompute(SQLContext.scala:413)
> 	at org.apache.spark.sql.SQLContext$QueryExecution.optimizedPlan(SQLContext.scala:413)
> 	at org.apache.spark.sql.SQLContext$QueryExecution.sparkPlan$lzycompute(SQLContext.scala:418)
> 	at org.apache.spark.sql.SQLContext$QueryExecution.sparkPlan(SQLContext.scala:416)
> 	at org.apache.spark.sql.SQLContext$QueryExecution.executedPlan$lzycompute(SQLContext.scala:422)
> 	at org.apache.spark.sql.SQLContext$QueryExecution.executedPlan(SQLContext.scala:422)
> 	at org.apache.spark.sql.SchemaRDD.collect(SchemaRDD.scala:444)
> 	at com.vitria.poc.SqlProductDemo$$anon$1$$anonfun$run$1.apply$mcV$sp(SqlProductDemo.scala:57)
> 	at com.vitria.poc.SqlProductDemo$$anon$1$$anonfun$run$1.apply(SqlProductDemo.scala:57)
> 	at com.vitria.poc.SqlProductDemo$$anon$1$$anonfun$run$1.apply(SqlProductDemo.scala:57)
> 	at scala.util.Try$.apply(Try.scala:161)
> 	at com.vitria.poc.SqlProductDemo$$anon$1.run(SqlProductDemo.scala:57)
> {quote}
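For illustration, a minimal Scala sketch of the workaround described above, assuming an {{org.apache.hadoop.hive.ql.metadata.Hive}} client like the one {{HiveMetastoreCatalog}} uses. The helper name {{tableExistsQuietly}} is hypothetical and this is not the actual change in the linked pull request; it only demonstrates the three-argument {{getTable}} overload suggested in the description:

{code:scala}
import org.apache.hadoop.hive.ql.metadata.Hive

object TableExistsExample {
  // Hypothetical helper: checks whether a table exists without triggering the
  // ERROR-level log entry. Passing throwException = false makes Hive.getTable
  // return null for a missing table instead of logging and throwing
  // NoSuchObjectException, so a simple null check is enough.
  def tableExistsQuietly(client: Hive, databaseName: String, tableName: String): Boolean =
    client.getTable(databaseName, tableName, false) != null
}
{code}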



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

