spark-user mailing list archives

From Yin Huai <huaiyin....@gmail.com>
Subject Re: Problem Accessing Hive Table from hiveContext
Date Mon, 01 Sep 2014 13:36:13 GMT
Hello Igor,

Although DECIMAL is supported, Hive 0.12 does not support user-definable
precision and scale; that was introduced in Hive 0.13.
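
If you need to stay on Hive 0.12, one possible workaround (just a sketch, and
the table name below is made up) is to declare the column as plain DECIMAL,
which Hive 0.12 parses without an explicit precision and scale:

    -- Hive 0.12 DDL: DECIMAL without precision/scale
    create table test_datatypes_plain(testbigint bigint, testdec decimal);

    // Spark 1.0.2 shell: query through HiveContext as before
    scala> val dataTypes = hiveContext.hql("select * from test_datatypes_plain")
    scala> dataTypes.collect().foreach(println)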

Thanks,

Yin


On Sat, Aug 30, 2014 at 1:50 AM, Zitser, Igor <igor.zitser@citi.com> wrote:

> Hi All,
> I am new to Spark, using Spark 1.0.2 and Hive 0.12.
>
> If the Hive table is created as test_datatypes(testbigint bigint, ss bigint),
> then "select * from test_datatypes" from Spark works fine.
>
> For "create table test_datatypes(testbigint bigint, testdec decimal(5,2) )"
>
> scala> val dataTypes=hiveContext.hql("select * from test_datatypes")
> 14/08/28 21:18:44 INFO parse.ParseDriver: Parsing command: select * from
> test_datatypes
> 14/08/28 21:18:44 INFO parse.ParseDriver: Parse Completed
> 14/08/28 21:18:44 INFO analysis.Analyzer: Max iterations (2) reached for
> batch MultiInstanceRelations
> 14/08/28 21:18:44 INFO analysis.Analyzer: Max iterations (2) reached for
> batch CaseInsensitiveAttributeReferences
> java.lang.IllegalArgumentException: Error: ',', ':', or ';' expected at
> position 14 from 'bigint:decimal(5,2)' [0:bigint, 6::, 7:decimal, 14:(,
> 15:5, 16:,, 17:2, 18:)]
>         at
> org.apache.hadoop.hive.serde2.typeinfo.TypeInfoUtils$TypeInfoParser.parseTypeInfos(TypeInfoUtils.java:312)
>         at
> org.apache.hadoop.hive.serde2.typeinfo.TypeInfoUtils.getTypeInfosFromTypeString(TypeInfoUtils.java:716)
>         at
> org.apache.hadoop.hive.serde2.lazy.LazyUtils.extractColumnInfo(LazyUtils.java:364)
>         at
> org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe.initSerdeParams(LazySimpleSerDe.java:288)
>         at
> org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe.initialize(LazySimpleSerDe.java:187)
>         at
> org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:218)
>         at
> org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:272)
>         at
> org.apache.hadoop.hive.ql.metadata.Table.checkValidity(Table.java:175)
>         at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:991)
>         at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:924)
>         at
> org.apache.spark.sql.hive.HiveMetastoreCatalog.lookupRelation(HiveMetastoreCatalog.scala:58)
>         at org.apache.spark.sql.hive.HiveContext$$anon$2.org$apache$spark$sql$catalyst$analysis$OverrideCatalog$$super$lookupRelation(HiveContext.scala:143)
>         at
> org.apache.spark.sql.catalyst.analysis.OverrideCatalog$$anonfun$lookupRelation$3.apply(Catalog.scala:122)
>         at
> org.apache.spark.sql.catalyst.analysis.OverrideCatalog$$anonfun$lookupRelation$3.apply(Catalog.scala:122)
>         at scala.Option.getOrElse(Option.scala:120)
>         at
> org.apache.spark.sql.catalyst.analysis.OverrideCatalog$class.lookupRelation(Catalog.scala:122)
>         at
> org.apache.spark.sql.hive.HiveContext$$anon$2.lookupRelation(HiveContext.scala:149)
>         at
> org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$2.applyOrElse(Analyzer.scala:83)
>         at
> org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$2.applyOrElse(Analyzer.scala:81)
>         at
> org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:165)
>         at
> org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:183)
>         at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
>         at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>         at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>         at
> scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
>         at
> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
>         at
> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
>         at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
>         at scala.collection.AbstractIterator.to(Iterator.scala:1157)
>         at
> scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
>         at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
>         at
> scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
>         at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
>         at
> org.apache.spark.sql.catalyst.trees.TreeNode.transformChildrenDown(TreeNode.scala:212)
>         at
> org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:168)
>         at
> org.apache.spark.sql.catalyst.trees.TreeNode.transform(TreeNode.scala:156)
>
>
> The same exception happens with the table created as "create table
> test_datatypes(testbigint bigint, testdate date)".
>
> Thanks, Igor.
>
