carbondata-issues mailing list archives

From "anubhav tarar (JIRA)" <j...@apache.org>
Subject [jira] [Created] (CARBONDATA-1048) Update Hive Guide
Date Fri, 12 May 2017 05:47:04 GMT
anubhav tarar created CARBONDATA-1048:
-----------------------------------------

             Summary: Update Hive Guide 
                 Key: CARBONDATA-1048
                 URL: https://issues.apache.org/jira/browse/CARBONDATA-1048
             Project: CarbonData
          Issue Type: Improvement
          Components: hive-integration
    Affects Versions: 1.0.0-incubating
         Environment: hive 1.2.1, spark 2.1
            Reporter: anubhav tarar
            Assignee: anubhav tarar
            Priority: Trivial


The decimal data type raises an exception when selecting data from the table in Hive, following the steps given in the Hive guide:

1) In Spark Shell :
a) Create Table -
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.CarbonSession._
val carbon = SparkSession.builder().config(sc.getConf).getOrCreateCarbonSession("hdfs://localhost:54310/opt/data")
scala> carbon.sql(""" create table testHive1(id int,name string,scale decimal(10,0),country
string,salary double) stored by'carbondata' """).show
b) Load Data - 
scala> carbon.sql(""" load data inpath 'hdfs://localhost:54310/Files/testHive1.csv' into
table testHive1 """ ).show
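
For reference, a testHive1.csv consistent with the query output shown at the end of this report could look like the following (the column order matching the table definition and the absence of a header row are assumptions):

1,yuhai,2,china,33000.1
2,runlin,2,china,33000.2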
2) In Hive :
a) Add Jars - 
add jar /home/neha/incubator-carbondata/assembly/target/scala-2.11/carbondata_2.11-1.1.0-incubating-SNAPSHOT-shade-hadoop2.7.2.jar;
add jar /opt/spark-2.1.0-bin-hadoop2.7/jars/spark-catalyst_2.11-2.1.0.jar;
add jar /home/neha/incubator-carbondata/integration/hive/carbondata-hive-1.1.0-incubating-SNAPSHOT.jar;
b) Create Table -
create table testHive1(id int, name string, scale decimal(10,0), country string, salary double);
c) Alter location - 
hive> alter table testHive1 set LOCATION 'hdfs://localhost:54310/opt/data/default/testhive1';
d) Set Properties - 
set hive.mapred.supports.subdirectories=true;
set mapreduce.input.fileinputformat.input.dir.recursive=true;
e) Alter FileFormat -
alter table testHive1 set FILEFORMAT
INPUTFORMAT "org.apache.carbondata.hive.MapredCarbonInputFormat"
OUTPUTFORMAT "org.apache.carbondata.hive.MapredCarbonOutputFormat"
SERDE "org.apache.carbondata.hive.CarbonHiveSerDe";

hive> ADD JAR /home/hduser/spark-2.1.0-bin-hadoop2.7/jars/spark-catalyst_2.11-2.1.0.jar;
Added [/home/hduser/spark-2.1.0-bin-hadoop2.7/jars/spark-catalyst_2.11-2.1.0.jar] to class path
Added resources: [/home/hduser/spark-2.1.0-bin-hadoop2.7/jars/spark-catalyst_2.11-2.1.0.jar]

f) Execute Queries - 
select * from testHive1;
3) Query :
hive> select * from testHive1;

Exception in thread "[main][partitionID:hive25;queryID:4537623368167]" java.lang.NoClassDefFoundError: scala/math/Ordered
	at java.lang.ClassLoader.defineClass1(Native Method)
	at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
	at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)

When I add the scala-library and scala-reflect jars, it works fine (the NoClassDefFoundError indicates that the Scala runtime classes are missing from Hive's classpath):

hive> ADD JAR /home/knoldus/Videos/scala-library-2.11.1.jar;
Added [/home/knoldus/Videos/scala-library-2.11.1.jar] to class path
Added resources: [/home/knoldus/Videos/scala-library-2.11.1.jar]
hive> ADD JAR /home/knoldus/Videos/scala-reflect-2.11.1.jar;
Added [/home/knoldus/Videos/scala-reflect-2.11.1.jar] to class path
Added resources: [/home/knoldus/Videos/scala-reflect-2.11.1.jar]

Firing the query again:

hive> select * from testHive1;
OK
2	runlin	2	china	33000.2
1	yuhai	2	china	33000.1

So it would be better to mention adding these jars in the Hive guide.
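
For example, the "Add Jars" step in the guide could list all of the required jars together; the paths below are only illustrative, and the exact versions and locations depend on the build and the local environment:

add jar <carbondata-build>/carbondata_2.11-1.1.0-incubating-SNAPSHOT-shade-hadoop2.7.2.jar;
add jar <carbondata-build>/carbondata-hive-1.1.0-incubating-SNAPSHOT.jar;
add jar <spark-home>/jars/spark-catalyst_2.11-2.1.0.jar;
add jar <scala-jars>/scala-library-2.11.1.jar;
add jar <scala-jars>/scala-reflect-2.11.1.jar;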






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
