carbondata-issues mailing list archives

From "Liang Chen (JIRA)" <j...@apache.org>
Subject [jira] [Closed] (CARBONDATA-82) NullPointerException by ColumnSchemaDetailsWrapper.<init>(ColumnSchemaDetailsWrapper.java:75)
Date Wed, 09 Nov 2016 23:07:58 GMT

     [ https://issues.apache.org/jira/browse/CARBONDATA-82?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Liang Chen closed CARBONDATA-82.
--------------------------------
    Resolution: Invalid

> NullPointerException by ColumnSchemaDetailsWrapper.<init>(ColumnSchemaDetailsWrapper.java:75)
> ---------------------------------------------------------------------------------------------
>
>                 Key: CARBONDATA-82
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-82
>             Project: CarbonData
>          Issue Type: Test
>          Components: spark-integration
>    Affects Versions: 0.2.0-incubating
>            Reporter: Shoujie Zhuo
>            Priority: Minor
>
> csv file:
> ss_sold_date_sk,ss_sold_time_sk,ss_item_sk,ss_customer_sk,ss_cdemo_sk,ss_hdemo_sk,ss_addr_sk,ss_store_sk,ss_promo_sk,ss_ticket_number,ss_quantity,ss_wholesale_cost,ss_list_price,ss_sales_price,ss_ext_discount_amt,ss_ext_sales_price,ss_ext_wholesale_cost,ss_ext_list_price,ss_ext_tax,ss_coupon_amt,ss_net_paid,ss_net_paid_inc_tax,ss_net_profit
> 2451813,65495,7649,79006,591617,3428,44839,10,5,1,79,11.41,18.71,2.80,99.54,221.20,901.39,1478.09,6.08,99.54,121.66,127.74,-779.73
> DDL:
> create table if not exists  store_sales
> (
>     ss_sold_date_sk           int,
>     ss_sold_time_sk           int,
>     ss_item_sk                int,
>     ss_customer_sk            int,
>     ss_cdemo_sk               int,
>     ss_hdemo_sk               int,
>     ss_addr_sk                int,
>     ss_store_sk               int,
>     ss_promo_sk               int,
>     ss_ticket_number          int,
>     ss_quantity               int,
>     ss_wholesale_cost         double,
>     ss_list_price             double,
>     ss_sales_price            double,
>     ss_ext_discount_amt       double,
>     ss_ext_sales_price        double,
>     ss_ext_wholesale_cost     double,
>     ss_ext_list_price         double,
>     ss_ext_tax                double,
>     ss_coupon_amt             double,
>     ss_net_paid               double,
>     ss_net_paid_inc_tax       double,
>     ss_net_profit             double
> )
> STORED BY 'org.apache.carbondata.format';
> Log:
> LOAD DATA  inpath 'hdfs://holodesk01/user/carbon-spark-sql/tpcds/2/store_sales' INTO table store_sales;
> INFO  20-07 13:43:39,249 - main Query [LOAD DATA  INPATH 'HDFS://HOLODESK01/USER/CARBON-SPARK-SQL/TPCDS/2/STORE_SALES' INTO TABLE STORE_SALES]
> INFO  20-07 13:43:39,307 - Successfully able to get the table metadata file lock
> INFO  20-07 13:43:39,324 - main Initiating Direct Load for the Table : (tpcds_carbon_2.store_sales)
> INFO  20-07 13:43:39,331 - [Block Distribution]
> INFO  20-07 13:43:39,332 - totalInputSpaceConsumed : 778266079 , defaultParallelism :
24
> INFO  20-07 13:43:39,332 - mapreduce.input.fileinputformat.split.maxsize : 32427753
> INFO  20-07 13:43:39,392 - Block broadcast_8 stored as values in memory (estimated size
264.0 KB, free 573.6 KB)
> INFO  20-07 13:43:39,465 - Block broadcast_8_piece0 stored as bytes in memory (estimated
size 23.9 KB, free 597.4 KB)
> INFO  20-07 13:43:39,467 - Added broadcast_8_piece0 in memory on localhost:50762 (size:
23.9 KB, free: 511.4 MB)
> INFO  20-07 13:43:39,468 - Created broadcast 8 from NewHadoopRDD at CarbonTextFile.scala:45
> INFO  20-07 13:43:39,478 - Total input paths to process : 1
> INFO  20-07 13:43:39,493 - Starting job: take at CarbonCsvRelation.scala:175
> INFO  20-07 13:43:39,494 - Got job 5 (take at CarbonCsvRelation.scala:175) with 1 output
partitions
> INFO  20-07 13:43:39,494 - Final stage: ResultStage 6 (take at CarbonCsvRelation.scala:175)
> INFO  20-07 13:43:39,494 - Parents of final stage: List()
> INFO  20-07 13:43:39,495 - Missing parents: List()
> INFO  20-07 13:43:39,496 - Submitting ResultStage 6 (MapPartitionsRDD[23] at map at CarbonTextFile.scala:55),
which has no missing parents
> INFO  20-07 13:43:39,499 - Block broadcast_9 stored as values in memory (estimated size
2.6 KB, free 600.0 KB)
> INFO  20-07 13:43:39,511 - Block broadcast_9_piece0 stored as bytes in memory (estimated
size 1600.0 B, free 601.5 KB)
> INFO  20-07 13:43:39,512 - Added broadcast_9_piece0 in memory on localhost:50762 (size:
1600.0 B, free: 511.4 MB)
> INFO  20-07 13:43:39,513 - Created broadcast 9 from broadcast at DAGScheduler.scala:1006
> INFO  20-07 13:43:39,514 - Submitting 1 missing tasks from ResultStage 6 (MapPartitionsRDD[23]
at map at CarbonTextFile.scala:55)
> INFO  20-07 13:43:39,514 - Adding task set 6.0 with 1 tasks
> INFO  20-07 13:43:39,517 - Starting task 0.0 in stage 6.0 (TID 9, localhost, partition
0,ANY, 2302 bytes)
> INFO  20-07 13:43:39,518 - Running task 0.0 in stage 6.0 (TID 9)
> INFO  20-07 13:43:39,523 - Input split: hdfs://holodesk01/user/carbon-spark-sql/tpcds/2/store_sales/data-m-00001.csv:0+32427753
> INFO  20-07 13:43:39,545 - Finished task 0.0 in stage 6.0 (TID 9). 3580 bytes result
sent to driver
> INFO  20-07 13:43:39,558 - Finished task 0.0 in stage 6.0 (TID 9) in 42 ms on localhost
(1/1)
> INFO  20-07 13:43:39,558 - ResultStage 6 (take at CarbonCsvRelation.scala:175) finished
in 0.042 s
> INFO  20-07 13:43:39,558 - Removed TaskSet 6.0, whose tasks have all completed, from
pool 
> INFO  20-07 13:43:39,558 - Job 5 finished: take at CarbonCsvRelation.scala:175, took
0.065209 s
> INFO  20-07 13:43:39,558 - Finished stage: org.apache.spark.scheduler.StageInfo@6c7379d3
> INFO  20-07 13:43:39,561 - task runtime:(count: 1, mean: 42.000000, stdev: 0.000000,
max: 42.000000, min: 42.000000)
> INFO  20-07 13:43:39,561 - 	0%	5%	10%	25%	50%	75%	90%	95%	100%
> INFO  20-07 13:43:39,561 - 	42.0 ms	42.0 ms	42.0 ms	42.0 ms	42.0 ms	42.0 ms	42.0 ms	42.0
ms	42.0 ms
> INFO  20-07 13:43:39,563 - task result size:(count: 1, mean: 3580.000000, stdev: 0.000000,
max: 3580.000000, min: 3580.000000)
> INFO  20-07 13:43:39,563 - 	0%	5%	10%	25%	50%	75%	90%	95%	100%
> INFO  20-07 13:43:39,563 - 	3.5 KB	3.5 KB	3.5 KB	3.5 KB	3.5 KB	3.5 KB	3.5 KB	3.5 KB	3.5
KB
> INFO  20-07 13:43:39,564 - have no column need to generate global dictionary
> AUDIT 20-07 13:43:39,564 - [holodesk01][hdfs][Thread-1]Data load request has been received
for table tpcds_carbon_2.store_sales
> INFO  20-07 13:43:39,565 - executor (non-fetch) time pct: (count: 1, mean: 26.190476,
stdev: 0.000000, max: 26.190476, min: 26.190476)
> INFO  20-07 13:43:39,565 - 	0%	5%	10%	25%	50%	75%	90%	95%	100%
> INFO  20-07 13:43:39,565 - 	26 %	26 %	26 %	26 %	26 %	26 %	26 %	26 %	26 %
> INFO  20-07 13:43:39,567 - other time pct: (count: 1, mean: 73.809524, stdev: 0.000000,
max: 73.809524, min: 73.809524)
> INFO  20-07 13:43:39,567 - 	0%	5%	10%	25%	50%	75%	90%	95%	100%
> INFO  20-07 13:43:39,568 - 	74 %	74 %	74 %	74 %	74 %	74 %	74 %	74 %	74 %
> INFO  20-07 13:43:39,582 - main compaction need status is false
> INFO  20-07 13:43:39,583 - [Block Distribution]
> INFO  20-07 13:43:39,584 - totalInputSpaceConsumed : 778266079 , defaultParallelism :
24
> INFO  20-07 13:43:39,584 - mapreduce.input.fileinputformat.split.maxsize : 32427753
> INFO  20-07 13:43:39,586 - Total input paths to process : 1
> INFO  20-07 13:43:39,599 - Total no of blocks : 24, No.of Nodes : 4
> INFO  20-07 13:43:39,599 - #Node: holodesk02 no.of.blocks: 6
> #Node: holodesk01 no.of.blocks: 6
> #Node: holodesk04 no.of.blocks: 6
> #Node: holodesk03 no.of.blocks: 6
> INFO  20-07 13:43:40,605 - Starting job: collect at CarbonDataRDDFactory.scala:717
> INFO  20-07 13:43:40,606 - Got job 6 (collect at CarbonDataRDDFactory.scala:717) with
4 output partitions
> INFO  20-07 13:43:40,606 - Final stage: ResultStage 7 (collect at CarbonDataRDDFactory.scala:717)
> INFO  20-07 13:43:40,607 - Parents of final stage: List()
> INFO  20-07 13:43:40,607 - Missing parents: List()
> INFO  20-07 13:43:40,607 - Submitting ResultStage 7 (CarbonDataLoadRDD[24] at RDD at
CarbonDataLoadRDD.scala:94), which has no missing parents
> INFO  20-07 13:43:40,608 - Prefered Location for split : holodesk02
> INFO  20-07 13:43:40,608 - Prefered Location for split : holodesk01
> INFO  20-07 13:43:40,608 - Prefered Location for split : holodesk04
> INFO  20-07 13:43:40,608 - Prefered Location for split : holodesk03
> INFO  20-07 13:43:40,613 - Block broadcast_10 stored as values in memory (estimated size
15.8 KB, free 617.3 KB)
> INFO  20-07 13:43:40,625 - Block broadcast_10_piece0 stored as bytes in memory (estimated
size 5.9 KB, free 623.2 KB)
> INFO  20-07 13:43:40,627 - Added broadcast_10_piece0 in memory on localhost:50762 (size:
5.9 KB, free: 511.4 MB)
> INFO  20-07 13:43:40,627 - Created broadcast 10 from broadcast at DAGScheduler.scala:1006
> INFO  20-07 13:43:40,628 - Submitting 4 missing tasks from ResultStage 7 (CarbonDataLoadRDD[24]
at RDD at CarbonDataLoadRDD.scala:94)
> INFO  20-07 13:43:40,628 - Adding task set 7.0 with 4 tasks
> INFO  20-07 13:43:40,631 - Starting task 0.0 in stage 7.0 (TID 10, localhost, partition
0,ANY, 2892 bytes)
> INFO  20-07 13:43:40,632 - Starting task 1.0 in stage 7.0 (TID 11, localhost, partition
1,ANY, 2892 bytes)
> INFO  20-07 13:43:40,633 - Starting task 2.0 in stage 7.0 (TID 12, localhost, partition
2,ANY, 2892 bytes)
> INFO  20-07 13:43:40,634 - Starting task 3.0 in stage 7.0 (TID 13, localhost, partition
3,ANY, 2892 bytes)
> INFO  20-07 13:43:40,634 - Running task 0.0 in stage 7.0 (TID 10)
> INFO  20-07 13:43:40,635 - Running task 1.0 in stage 7.0 (TID 11)
> INFO  20-07 13:43:40,635 - Running task 2.0 in stage 7.0 (TID 12)
> INFO  20-07 13:43:40,635 - Running task 3.0 in stage 7.0 (TID 13)
> INFO  20-07 13:43:40,648 - Input split: holodesk04
> INFO  20-07 13:43:40,648 - The Block Count in this node :6
> INFO  20-07 13:43:40,649 - Input split: holodesk01
> INFO  20-07 13:43:40,649 - The Block Count in this node :6
> INFO  20-07 13:43:40,649 - [Executor task launch worker-7][partitionID:tpcds_carbon_2_store_sales_00be80d1-400a-425d-9c7f-4acf3b3a7bb3]
************* Is Columnar Storagetrue
> INFO  20-07 13:43:40,649 - [Executor task launch worker-6][partitionID:tpcds_carbon_2_store_sales_6302551d-dc77-4440-a26e-cbafb9d22c8c]
************* Is Columnar Storagetrue
> INFO  20-07 13:43:40,649 - Input split: holodesk03
> INFO  20-07 13:43:40,650 - The Block Count in this node :6
> INFO  20-07 13:43:40,650 - [Executor task launch worker-8][partitionID:tpcds_carbon_2_store_sales_94282d67-f4de-42dd-b61c-af8483cf3d21]
************* Is Columnar Storagetrue
> INFO  20-07 13:43:40,649 - Input split: holodesk02
> INFO  20-07 13:43:40,651 - The Block Count in this node :6
> INFO  20-07 13:43:40,651 - [Executor task launch worker-5][partitionID:tpcds_carbon_2_store_sales_3e4ba964-bcdc-4196-8d81-c590f2c67605]
************* Is Columnar Storagetrue
> INFO  20-07 13:43:40,701 - [Executor task launch worker-6][partitionID:tpcds_carbon_2_store_sales_6302551d-dc77-4440-a26e-cbafb9d22c8c]
Kettle environment initialized
> INFO  20-07 13:43:40,706 - [Executor task launch worker-8][partitionID:tpcds_carbon_2_store_sales_94282d67-f4de-42dd-b61c-af8483cf3d21]
Kettle environment initialized
> INFO  20-07 13:43:40,707 - [Executor task launch worker-7][partitionID:tpcds_carbon_2_store_sales_00be80d1-400a-425d-9c7f-4acf3b3a7bb3]
Kettle environment initialized
> INFO  20-07 13:43:40,713 - [Executor task launch worker-5][partitionID:tpcds_carbon_2_store_sales_3e4ba964-bcdc-4196-8d81-c590f2c67605]
Kettle environment initialized
> INFO  20-07 13:43:40,751 - [Executor task launch worker-8][partitionID:tpcds_carbon_2_store_sales_94282d67-f4de-42dd-b61c-af8483cf3d21]
** Using csv file **
> INFO  20-07 13:43:40,756 - [Executor task launch worker-6][partitionID:tpcds_carbon_2_store_sales_6302551d-dc77-4440-a26e-cbafb9d22c8c]
** Using csv file **
> INFO  20-07 13:43:40,764 - store_sales: Graph - CSV Input *****************Started all
csv reading***********
> INFO  20-07 13:43:40,774 - [pool-40-thread-1][partitionID:PROCESS_BLOCKS;queryID:pool-40-thread-1]
*****************started csv reading by thread***********
> INFO  20-07 13:43:40,788 - [pool-40-thread-2][partitionID:PROCESS_BLOCKS;queryID:pool-40-thread-2]
*****************started csv reading by thread***********
> INFO  20-07 13:43:40,795 - [Executor task launch worker-8][partitionID:tpcds_carbon_2_store_sales_94282d67-f4de-42dd-b61c-af8483cf3d21]
Graph execution is started /mnt/disk1/spark/438978154880668/3/etl/tpcds_carbon_2/store_sales/0/3/store_sales.ktr
> INFO  20-07 13:43:40,798 - store_sales: Graph - CSV Input *****************Started all
csv reading***********
> INFO  20-07 13:43:40,809 - [Executor task launch worker-6][partitionID:tpcds_carbon_2_store_sales_6302551d-dc77-4440-a26e-cbafb9d22c8c]
Graph execution is started /mnt/disk1/spark/438978153902729/1/etl/tpcds_carbon_2/store_sales/0/1/store_sales.ktr
> INFO  20-07 13:43:40,813 - [pool-41-thread-1][partitionID:PROCESS_BLOCKS;queryID:pool-41-thread-1]
*****************started csv reading by thread***********
> INFO  20-07 13:43:40,814 - [pool-41-thread-2][partitionID:PROCESS_BLOCKS;queryID:pool-41-thread-2]
*****************started csv reading by thread***********
> ERROR 20-07 13:43:40,819 - [store_sales: Graph - Carbon Surrogate Key Generator][partitionID:0]

> java.lang.NullPointerException
> 	at org.carbondata.processing.schema.metadata.ColumnSchemaDetailsWrapper.<init>(ColumnSchemaDetailsWrapper.java:75)
> 	at org.carbondata.processing.surrogatekeysgenerator.csvbased.CarbonCSVBasedSeqGenMeta.initialize(CarbonCSVBasedSeqGenMeta.java:787)
> 	at org.carbondata.processing.surrogatekeysgenerator.csvbased.CarbonCSVBasedSeqGenStep.processRow(CarbonCSVBasedSeqGenStep.java:294)
> 	at org.pentaho.di.trans.step.RunThread.run(RunThread.java:50)
> 	at java.lang.Thread.run(Thread.java:745)
> INFO  20-07 13:43:40,819 - [store_sales: Graph - Sort Key: Sort keysstore_sales][partitionID:0]
Record Processed For table: store_sales
> INFO  20-07 13:43:40,819 - [store_sales: Graph - Sort Key: Sort keysstore_sales][partitionID:0]
Number of Records was Zero
> INFO  20-07 13:43:40,819 - [store_sales: Graph - Sort Key: Sort keysstore_sales][partitionID:0]
Summary: Carbon Sort Key Step: Read: 0: Write: 0
> INFO  20-07 13:43:40,820 - [store_sales: Graph - Carbon Slice Mergerstore_sales][partitionID:sales]
Record Procerssed For table: store_sales
> INFO  20-07 13:43:40,820 - [store_sales: Graph - Carbon Slice Mergerstore_sales][partitionID:sales]
Summary: Carbon Slice Merger Step: Read: 0: Write: 0
> INFO  20-07 13:43:40,820 - [Executor task launch worker-5][partitionID:tpcds_carbon_2_store_sales_3e4ba964-bcdc-4196-8d81-c590f2c67605]
** Using csv file **
> ERROR 20-07 13:43:40,821 - [store_sales: Graph - MDKeyGenstore_sales][partitionID:0] Local data load folder location does not exist: /mnt/disk1/spark/438978154880668/3/tpcds_carbon_2/store_sales/Fact/Part0/Segment_0/3
> INFO  20-07 13:43:40,841 - [Executor task launch worker-7][partitionID:tpcds_carbon_2_store_sales_00be80d1-400a-425d-9c7f-4acf3b3a7bb3]
** Using csv file **
> INFO  20-07 13:43:40,854 - [Executor task launch worker-5][partitionID:tpcds_carbon_2_store_sales_3e4ba964-bcdc-4196-8d81-c590f2c67605]
Graph execution is started /mnt/disk2/spark/438978155737218/0/etl/tpcds_carbon_2/store_sales/0/0/store_sales.ktr
> ERROR 20-07 13:43:40,854 - [store_sales: Graph - Carbon Surrogate Key Generator][partitionID:0]

> java.lang.NullPointerException
> 	at org.carbondata.processing.schema.metadata.ColumnSchemaDetailsWrapper.<init>(ColumnSchemaDetailsWrapper.java:75)
> 	at org.carbondata.processing.surrogatekeysgenerator.csvbased.CarbonCSVBasedSeqGenMeta.initialize(CarbonCSVBasedSeqGenMeta.java:787)
> 	at org.carbondata.processing.surrogatekeysgenerator.csvbased.CarbonCSVBasedSeqGenStep.processRow(CarbonCSVBasedSeqGenStep.java:294)
> 	at org.pentaho.di.trans.step.RunThread.run(RunThread.java:50)
> 	at java.lang.Thread.run(Thread.java:745)
> ERROR 20-07 13:43:40,855 - [store_sales: Graph - MDKeyGenstore_sales][partitionID:0] Local data load folder location does not exist: /mnt/disk1/spark/438978153902729/1/tpcds_carbon_2/store_sales/Fact/Part0/Segment_0/1
> INFO  20-07 13:43:40,855 - [store_sales: Graph - Sort Key: Sort keysstore_sales][partitionID:0]
Record Processed For table: store_sales
> INFO  20-07 13:43:40,855 - [store_sales: Graph - Sort Key: Sort keysstore_sales][partitionID:0]
Number of Records was Zero
> INFO  20-07 13:43:40,855 - [store_sales: Graph - Sort Key: Sort keysstore_sales][partitionID:0]
Summary: Carbon Sort Key Step: Read: 0: Write: 0
> INFO  20-07 13:43:40,856 - store_sales: Graph - CSV Input *****************Started all
csv reading***********
> INFO  20-07 13:43:40,857 - [store_sales: Graph - Carbon Slice Mergerstore_sales][partitionID:sales]
Record Procerssed For table: store_sales
> INFO  20-07 13:43:40,857 - [store_sales: Graph - Carbon Slice Mergerstore_sales][partitionID:sales]
Summary: Carbon Slice Merger Step: Read: 0: Write: 0
> INFO  20-07 13:43:40,867 - [pool-42-thread-2][partitionID:PROCESS_BLOCKS;queryID:pool-42-thread-2]
*****************started csv reading by thread***********
> INFO  20-07 13:43:40,869 - [pool-42-thread-1][partitionID:PROCESS_BLOCKS;queryID:pool-42-thread-1]
*****************started csv reading by thread***********
> INFO  20-07 13:43:40,872 - store_sales: Graph - CSV Input *****************Started all
csv reading***********
> INFO  20-07 13:43:40,878 - [pool-43-thread-1][partitionID:PROCESS_BLOCKS;queryID:pool-43-thread-1]
*****************started csv reading by thread***********
> INFO  20-07 13:43:40,881 - [pool-43-thread-2][partitionID:PROCESS_BLOCKS;queryID:pool-43-thread-2]
*****************started csv reading by thread***********
> INFO  20-07 13:43:40,886 - [Executor task launch worker-7][partitionID:tpcds_carbon_2_store_sales_00be80d1-400a-425d-9c7f-4acf3b3a7bb3]
Graph execution is started /mnt/disk1/spark/438978153678637/2/etl/tpcds_carbon_2/store_sales/0/2/store_sales.ktr
> ERROR 20-07 13:43:40,898 - [store_sales: Graph - Carbon Surrogate Key Generator][partitionID:0]

> java.lang.NullPointerException
> 	at org.carbondata.processing.schema.metadata.ColumnSchemaDetailsWrapper.<init>(ColumnSchemaDetailsWrapper.java:75)
> 	at org.carbondata.processing.surrogatekeysgenerator.csvbased.CarbonCSVBasedSeqGenMeta.initialize(CarbonCSVBasedSeqGenMeta.java:787)
> 	at org.carbondata.processing.surrogatekeysgenerator.csvbased.CarbonCSVBasedSeqGenStep.processRow(CarbonCSVBasedSeqGenStep.java:294)
> 	at org.pentaho.di.trans.step.RunThread.run(RunThread.java:50)
> 	at java.lang.Thread.run(Thread.java:745)
> INFO  20-07 13:43:40,899 - [store_sales: Graph - Carbon Slice Mergerstore_sales][partitionID:sales]
Record Procerssed For table: store_sales
> ERROR 20-07 13:43:40,899 - [store_sales: Graph - MDKeyGenstore_sales][partitionID:0] Local data load folder location does not exist: /mnt/disk2/spark/438978155737218/0/tpcds_carbon_2/store_sales/Fact/Part0/Segment_0/0
> INFO  20-07 13:43:40,899 - [store_sales: Graph - Carbon Slice Mergerstore_sales][partitionID:sales]
Summary: Carbon Slice Merger Step: Read: 0: Write: 0
> INFO  20-07 13:43:40,899 - [store_sales: Graph - Sort Key: Sort keysstore_sales][partitionID:0]
Record Processed For table: store_sales
> INFO  20-07 13:43:40,899 - [store_sales: Graph - Sort Key: Sort keysstore_sales][partitionID:0]
Number of Records was Zero
> INFO  20-07 13:43:40,900 - [store_sales: Graph - Sort Key: Sort keysstore_sales][partitionID:0]
Summary: Carbon Sort Key Step: Read: 0: Write: 0
> ERROR 20-07 13:43:40,904 - [store_sales: Graph - Carbon Surrogate Key Generator][partitionID:0]

> java.lang.NullPointerException
> 	at org.carbondata.processing.schema.metadata.ColumnSchemaDetailsWrapper.<init>(ColumnSchemaDetailsWrapper.java:75)
> 	at org.carbondata.processing.surrogatekeysgenerator.csvbased.CarbonCSVBasedSeqGenMeta.initialize(CarbonCSVBasedSeqGenMeta.java:787)
> 	at org.carbondata.processing.surrogatekeysgenerator.csvbased.CarbonCSVBasedSeqGenStep.processRow(CarbonCSVBasedSeqGenStep.java:294)
> 	at org.pentaho.di.trans.step.RunThread.run(RunThread.java:50)
> 	at java.lang.Thread.run(Thread.java:745)
> INFO  20-07 13:43:40,906 - [store_sales: Graph - Sort Key: Sort keysstore_sales][partitionID:0]
Record Processed For table: store_sales
> INFO  20-07 13:43:40,906 - [store_sales: Graph - Carbon Slice Mergerstore_sales][partitionID:sales]
Record Procerssed For table: store_sales
> ERROR 20-07 13:43:40,907 - [store_sales: Graph - MDKeyGenstore_sales][partitionID:0] Local data load folder location does not exist: /mnt/disk1/spark/438978153678637/2/tpcds_carbon_2/store_sales/Fact/Part0/Segment_0/2
> INFO  20-07 13:43:40,907 - [store_sales: Graph - Sort Key: Sort keysstore_sales][partitionID:0]
Number of Records was Zero
> INFO  20-07 13:43:40,907 - [store_sales: Graph - Carbon Slice Mergerstore_sales][partitionID:sales]
Summary: Carbon Slice Merger Step: Read: 0: Write: 0
> INFO  20-07 13:43:40,907 - [store_sales: Graph - Sort Key: Sort keysstore_sales][partitionID:0]
Summary: Carbon Sort Key Step: Read: 0: Write: 0
> INFO  20-07 13:43:41,464 - Cleaned accumulator 18
> INFO  20-07 13:43:41,492 - Removed broadcast_8_piece0 on localhost:50762 in memory (size:
23.9 KB, free: 511.5 MB)
> INFO  20-07 13:43:41,497 - Removed broadcast_7_piece0 on localhost:50762 in memory (size:
23.9 KB, free: 511.5 MB)
> INFO  20-07 13:43:41,499 - Removed broadcast_9_piece0 on localhost:50762 in memory (size:
1600.0 B, free: 511.5 MB)
> INFO  20-07 13:43:49,599 - [pool-41-thread-2][partitionID:PROCESS_BLOCKS;queryID:pool-41-thread-2]
*****************Completed csv reading by thread***********
> INFO  20-07 13:43:49,855 - [pool-41-thread-1][partitionID:PROCESS_BLOCKS;queryID:pool-41-thread-1]
*****************Completed csv reading by thread***********
> INFO  20-07 13:43:49,957 - store_sales: Graph - CSV Input *****************Completed
all csv reading***********
> INFO  20-07 13:43:49,957 - [Executor task launch worker-6][partitionID:tpcds_carbon_2_store_sales_6302551d-dc77-4440-a26e-cbafb9d22c8c]
Graph execution is finished.
> ERROR 20-07 13:43:49,957 - [Executor task launch worker-6][partitionID:tpcds_carbon_2_store_sales_6302551d-dc77-4440-a26e-cbafb9d22c8c]
Graph Execution had errors
> ERROR 20-07 13:43:49,957 - [Executor task launch worker-6][partitionID:tpcds_carbon_2_store_sales_6302551d-dc77-4440-a26e-cbafb9d22c8c]

> org.carbondata.processing.etl.DataLoadingException: Internal Errors
> 	at org.carbondata.processing.csvload.DataGraphExecuter.execute(DataGraphExecuter.java:253)
> 	at org.carbondata.processing.csvload.DataGraphExecuter.executeGraph(DataGraphExecuter.java:168)
> 	at org.carbondata.spark.load.CarbonLoaderUtil.executeGraph(CarbonLoaderUtil.java:189)
> 	at org.carbondata.spark.rdd.CarbonDataLoadRDD$$anon$1.<init>(CarbonDataLoadRDD.scala:189)
> 	at org.carbondata.spark.rdd.CarbonDataLoadRDD.compute(CarbonDataLoadRDD.scala:148)
> 	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
> 	at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
> 	at org.apache.spark.scheduler.Task.run(Task.scala:89)
> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> 	at java.lang.Thread.run(Thread.java:745)
> INFO  20-07 13:43:49,958 - DataLoad failure
> INFO  20-07 13:43:49,969 - Finished task 1.0 in stage 7.0 (TID 11). 952 bytes result
sent to driver
> INFO  20-07 13:43:49,982 - Finished task 1.0 in stage 7.0 (TID 11) in 9350 ms on localhost
(1/4)
> INFO  20-07 13:43:50,482 - [pool-40-thread-2][partitionID:PROCESS_BLOCKS;queryID:pool-40-thread-2]
*****************Completed csv reading by thread***********
> INFO  20-07 13:43:50,943 - [pool-42-thread-2][partitionID:PROCESS_BLOCKS;queryID:pool-42-thread-2]
*****************Completed csv reading by thread***********
> INFO  20-07 13:43:51,270 - [pool-40-thread-1][partitionID:PROCESS_BLOCKS;queryID:pool-40-thread-1]
*****************Completed csv reading by thread***********
> INFO  20-07 13:43:51,408 - store_sales: Graph - CSV Input *****************Completed
all csv reading***********
> INFO  20-07 13:43:51,408 - [Executor task launch worker-8][partitionID:tpcds_carbon_2_store_sales_94282d67-f4de-42dd-b61c-af8483cf3d21]
Graph execution is finished.
> ERROR 20-07 13:43:51,409 - [Executor task launch worker-8][partitionID:tpcds_carbon_2_store_sales_94282d67-f4de-42dd-b61c-af8483cf3d21]
Graph Execution had errors
> ERROR 20-07 13:43:51,409 - [Executor task launch worker-8][partitionID:tpcds_carbon_2_store_sales_94282d67-f4de-42dd-b61c-af8483cf3d21]

> org.carbondata.processing.etl.DataLoadingException: Internal Errors
> 	at org.carbondata.processing.csvload.DataGraphExecuter.execute(DataGraphExecuter.java:253)
> 	at org.carbondata.processing.csvload.DataGraphExecuter.executeGraph(DataGraphExecuter.java:168)
> 	at org.carbondata.spark.load.CarbonLoaderUtil.executeGraph(CarbonLoaderUtil.java:189)
> 	at org.carbondata.spark.rdd.CarbonDataLoadRDD$$anon$1.<init>(CarbonDataLoadRDD.scala:189)
> 	at org.carbondata.spark.rdd.CarbonDataLoadRDD.compute(CarbonDataLoadRDD.scala:148)
> 	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
> 	at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
> 	at org.apache.spark.scheduler.Task.run(Task.scala:89)
> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> 	at java.lang.Thread.run(Thread.java:745)
> INFO  20-07 13:43:51,409 - DataLoad failure
> INFO  20-07 13:43:51,420 - Finished task 3.0 in stage 7.0 (TID 13). 952 bytes result
sent to driver
> INFO  20-07 13:43:51,434 - Finished task 3.0 in stage 7.0 (TID 13) in 10800 ms on localhost
(2/4)
> INFO  20-07 13:43:51,435 - [pool-43-thread-2][partitionID:PROCESS_BLOCKS;queryID:pool-43-thread-2]
*****************Completed csv reading by thread***********
> INFO  20-07 13:43:52,466 - [pool-42-thread-1][partitionID:PROCESS_BLOCKS;queryID:pool-42-thread-1]
*****************Completed csv reading by thread***********
> INFO  20-07 13:43:52,588 - store_sales: Graph - CSV Input *****************Completed
all csv reading***********
> INFO  20-07 13:43:52,590 - [Executor task launch worker-5][partitionID:tpcds_carbon_2_store_sales_3e4ba964-bcdc-4196-8d81-c590f2c67605]
Graph execution is finished.
> ERROR 20-07 13:43:52,590 - [Executor task launch worker-5][partitionID:tpcds_carbon_2_store_sales_3e4ba964-bcdc-4196-8d81-c590f2c67605]
Graph Execution had errors
> ERROR 20-07 13:43:52,590 - [Executor task launch worker-5][partitionID:tpcds_carbon_2_store_sales_3e4ba964-bcdc-4196-8d81-c590f2c67605]

> org.carbondata.processing.etl.DataLoadingException: Internal Errors
> 	at org.carbondata.processing.csvload.DataGraphExecuter.execute(DataGraphExecuter.java:253)
> 	at org.carbondata.processing.csvload.DataGraphExecuter.executeGraph(DataGraphExecuter.java:168)
> 	at org.carbondata.spark.load.CarbonLoaderUtil.executeGraph(CarbonLoaderUtil.java:189)
> 	at org.carbondata.spark.rdd.CarbonDataLoadRDD$$anon$1.<init>(CarbonDataLoadRDD.scala:189)
> 	at org.carbondata.spark.rdd.CarbonDataLoadRDD.compute(CarbonDataLoadRDD.scala:148)
> 	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
> 	at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
> 	at org.apache.spark.scheduler.Task.run(Task.scala:89)
> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> 	at java.lang.Thread.run(Thread.java:745)
> INFO  20-07 13:43:52,591 - DataLoad failure
> INFO  20-07 13:43:52,603 - Finished task 0.0 in stage 7.0 (TID 10). 952 bytes result
sent to driver
> INFO  20-07 13:43:52,614 - Finished task 0.0 in stage 7.0 (TID 10) in 11984 ms on localhost
(3/4)
> INFO  20-07 13:43:52,638 - [pool-43-thread-1][partitionID:PROCESS_BLOCKS;queryID:pool-43-thread-1]
*****************Completed csv reading by thread***********
> INFO  20-07 13:43:52,824 - store_sales: Graph - CSV Input *****************Completed
all csv reading***********
> INFO  20-07 13:43:52,824 - [Executor task launch worker-7][partitionID:tpcds_carbon_2_store_sales_00be80d1-400a-425d-9c7f-4acf3b3a7bb3]
Graph execution is finished.
> ERROR 20-07 13:43:52,825 - [Executor task launch worker-7][partitionID:tpcds_carbon_2_store_sales_00be80d1-400a-425d-9c7f-4acf3b3a7bb3]
Graph Execution had errors
> ERROR 20-07 13:43:52,825 - [Executor task launch worker-7][partitionID:tpcds_carbon_2_store_sales_00be80d1-400a-425d-9c7f-4acf3b3a7bb3]

> org.carbondata.processing.etl.DataLoadingException: Internal Errors
> 	at org.carbondata.processing.csvload.DataGraphExecuter.execute(DataGraphExecuter.java:253)
> 	at org.carbondata.processing.csvload.DataGraphExecuter.executeGraph(DataGraphExecuter.java:168)
> 	at org.carbondata.spark.load.CarbonLoaderUtil.executeGraph(CarbonLoaderUtil.java:189)
> 	at org.carbondata.spark.rdd.CarbonDataLoadRDD$$anon$1.<init>(CarbonDataLoadRDD.scala:189)
> 	at org.carbondata.spark.rdd.CarbonDataLoadRDD.compute(CarbonDataLoadRDD.scala:148)
> 	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
> 	at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
> 	at org.apache.spark.scheduler.Task.run(Task.scala:89)
> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> 	at java.lang.Thread.run(Thread.java:745)
> INFO  20-07 13:43:52,825 - DataLoad failure
> INFO  20-07 13:43:52,837 - Finished task 2.0 in stage 7.0 (TID 12). 952 bytes result
sent to driver
> INFO  20-07 13:43:52,849 - Finished task 2.0 in stage 7.0 (TID 12) in 12216 ms on localhost
(4/4)
> INFO  20-07 13:43:52,849 - ResultStage 7 (collect at CarbonDataRDDFactory.scala:717)
finished in 12.219 s
> INFO  20-07 13:43:52,849 - Removed TaskSet 7.0, whose tasks have all completed, from
pool 
> INFO  20-07 13:43:52,849 - Finished stage: org.apache.spark.scheduler.StageInfo@46ffcf8b
> INFO  20-07 13:43:52,849 - Job 6 finished: collect at CarbonDataRDDFactory.scala:717,
took 12.244086 s
> INFO  20-07 13:43:52,850 - ********starting clean up**********
> INFO  20-07 13:43:52,851 - task runtime:(count: 4, mean: 11087.500000, stdev: 1137.847419, max: 12216.000000, min: 9350.000000)
> INFO  20-07 13:43:52,851 - 	0%	5%	10%	25%	50%	75%	90%	95%	100%
> INFO  20-07 13:43:52,851 - 	9.4 s	9.4 s	9.4 s	10.8 s	12.0 s	12.2 s	12.2 s	12.2 s	12.2 s
> INFO  20-07 13:43:52,853 - task result size:(count: 4, mean: 952.000000, stdev: 0.000000, max: 952.000000, min: 952.000000)
> INFO  20-07 13:43:52,853 - 	0%	5%	10%	25%	50%	75%	90%	95%	100%
> INFO  20-07 13:43:52,853 - 	952.0 B	952.0 B	952.0 B	952.0 B	952.0 B	952.0 B	952.0 B	952.0 B	952.0 B
> INFO  20-07 13:43:52,855 - executor (non-fetch) time pct: (count: 4, mean: 99.639701, stdev: 0.042276, max: 99.688933, min: 99.572193)
> INFO  20-07 13:43:52,855 - 	0%	5%	10%	25%	50%	75%	90%	95%	100%
> INFO  20-07 13:43:52,855 - 	100 %	100 %	100 %	100 %	100 %	100 %	100 %	100 %	100 %
> INFO  20-07 13:43:52,857 - other time pct: (count: 4, mean: 0.360299, stdev: 0.042276, max: 0.427807, min: 0.311067)
> INFO  20-07 13:43:52,857 - 	0%	5%	10%	25%	50%	75%	90%	95%	100%
> INFO  20-07 13:43:52,857 - 	 0 %	 0 %	 0 %	 0 %	 0 %	 0 %	 0 %	 0 %	 0 %
> INFO  20-07 13:43:53,079 - ********clean up done**********
> AUDIT 20-07 13:43:53,079 - [holodesk01][hdfs][Thread-1]Data load is failed for tpcds_carbon_2.store_sales
> WARN  20-07 13:43:53,080 - Unable to write load metadata file
> ERROR 20-07 13:43:53,080 - main 
> java.lang.Exception: Dataload failure
> 	at org.carbondata.spark.rdd.CarbonDataRDDFactory$.loadCarbonData(CarbonDataRDDFactory.scala:779)
> 	at org.apache.spark.sql.execution.command.LoadTable.run(carbonTableSchema.scala:1146)
> 	at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
> 	at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
> 	at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
> 	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
> 	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
> 	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
> 	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
> 	at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55)
> 	at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:55)
> 	at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:145)
> 	at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:130)
> 	at org.carbondata.spark.rdd.CarbonDataFrameRDD.<init>(CarbonDataFrameRDD.scala:23)
> 	at org.apache.spark.sql.CarbonContext.sql(CarbonContext.scala:109)
> 	at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:63)
> 	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:311)
> 	at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
> 	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:226)
> 	at org.apache.spark.sql.hive.cli.CarbonSQLCLIDriver$.main(CarbonSQLCLIDriver.scala:40)
> 	at org.apache.spark.sql.hive.cli.CarbonSQLCLIDriver.main(CarbonSQLCLIDriver.scala)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:606)
> 	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
> 	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
> 	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
> 	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
> 	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> AUDIT 20-07 13:43:53,081 - [holodesk01][hdfs][Thread-1]Dataload failure for tpcds_carbon_2.store_sales. Please check the logs
> INFO  20-07 13:43:53,083 - Table MetaData Unlocked Successfully after data load
> ERROR 20-07 13:43:53,083 - Failed in [LOAD DATA  inpath 'hdfs://holodesk01/user/carbon-spark-sql/tpcds/2/store_sales' INTO table store_sales]
> java.lang.Exception: Dataload failure
> 	at org.carbondata.spark.rdd.CarbonDataRDDFactory$.loadCarbonData(CarbonDataRDDFactory.scala:779)
> 	at org.apache.spark.sql.execution.command.LoadTable.run(carbonTableSchema.scala:1146)
> 	at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
> 	at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
> 	at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
> 	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
> 	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
> 	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
> 	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
> 	at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55)
> 	at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:55)
> 	at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:145)
> 	at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:130)
> 	at org.carbondata.spark.rdd.CarbonDataFrameRDD.<init>(CarbonDataFrameRDD.scala:23)
> 	at org.apache.spark.sql.CarbonContext.sql(CarbonContext.scala:109)
> 	at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:63)
> 	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:311)
> 	at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
> 	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:226)
> 	at org.apache.spark.sql.hive.cli.CarbonSQLCLIDriver$.main(CarbonSQLCLIDriver.scala:40)
> 	at org.apache.spark.sql.hive.cli.CarbonSQLCLIDriver.main(CarbonSQLCLIDriver.scala)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:606)
> 	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
> 	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
> 	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
> 	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
> 	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> ```



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

