carbondata-issues mailing list archives

From "SWATI RAO (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (CARBONDATA-1042) Delete Operation Failed in automation
Date Wed, 10 May 2017 16:03:04 GMT

    [ https://issues.apache.org/jira/browse/CARBONDATA-1042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16004917#comment-16004917 ]

SWATI RAO commented on CARBONDATA-1042:
---------------------------------------

[~ravi.pesala]: Manually it works fine, but when we execute it in automation it throws the above error. Let's close it for now; the problem may be due to an automation issue rather than one in CarbonData.

> Delete Operation Failed in automation
> ---------------------------------------
>
>                 Key: CARBONDATA-1042
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-1042
>             Project: CarbonData
>          Issue Type: Bug
>    Affects Versions: 1.0.0-incubating
>         Environment: Spark 1.6
>            Reporter: SWATI RAO
>            Priority: Trivial
>         Attachments: 2000_UniqData.csv
>
>
> Steps to Reproduce:
> Create Table:
> CREATE TABLE uniqdata (CUST_ID int, CUST_NAME String, ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint, BIGINT_COLUMN2 bigint, DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10), Double_COLUMN1 double, Double_COLUMN2 double, INTEGER_COLUMN1 int) STORED BY 'org.apache.carbondata.format'
> Load Data:
> LOAD DATA INPATH 'HDFS_URL/BabuStore/Data/uniqdata/2000_UniqData.csv' into table uniqdata OPTIONS('DELIMITER'=',', 'QUOTECHAR'='"', 'BAD_RECORDS_ACTION'='FORCE', 'FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1')
> Delete Query:
> delete from uniqdata where doj='1970-01-15 02:00:03' and dob='1970-01-15 01:00:03' or INTEGER_COLUMN1=15
> Result In Automation:
> Delete_193,FAIL,Delete data operation is failed. Job aborted due to stage failure: Task 0 in stage 503.0 failed 4 times, most recent failure: Lost task 0.3 in stage 503.0 (TID 768, hadoop-master): java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.ArrayIndexOutOfBoundsException: 0
> 	at org.apache.carbondata.core.scan.processor.AbstractDataBlockIterator.updateScanner(AbstractDataBlockIterator.java:136)
> 	at org.apache.carbondata.core.scan.processor.impl.DataBlockIteratorImpl.next(DataBlockIteratorImpl.java:50)
> 	at org.apache.carbondata.core.scan.processor.impl.DataBlockIteratorImpl.next(DataBlockIteratorImpl.java:32)
> 	at org.apache.carbondata.core.scan.result.iterator.DetailQueryResultIterator.getBatchResult(DetailQueryResultIterator.java:50)
> 	at org.apache.carbondata.core.scan.result.iterator.DetailQueryResultIterator.next(DetailQueryResultIterator.java:41)
> 	at org.apache.carbondata.core.scan.result.iterator.DetailQueryResultIterator.next(DetailQueryResultIterator.java:31)
> 	at org.apache.carbondata.core.scan.result.iterator.ChunkRowIterator.<init>(ChunkRowIterator.java:41)
> 	at org.apache.carbondata.hadoop.CarbonRecordReader.initialize(CarbonRecordReader.java:79)
> 	at org.apache.carbondata.spark.rdd.CarbonScanRDD.compute(CarbonScanRDD.scala:204)
> 	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
> 	at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
> 	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
> 	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
> 	at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
> 	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
> 	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
> 	at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
> 	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
> 	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
> 	at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
> 	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
> 	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
> 	at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
> 	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
> 	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
> 	at org.apache.spark.scheduler.Task.run(Task.scala:89)
> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> 	at java.lang.Thread.run(Thread.java:745)
> Caused by: java.util.concurrent.ExecutionException: java.lang.ArrayIndexOutOfBoundsException: 0
> 	at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> 	at java.util.concurrent.FutureTask.get(FutureTask.java:192)
> 	at org.apache.carbondata.core.scan.processor.AbstractDataBlockIterator.getNextScannedResult(AbstractDataBlockIterator.java:146)
> 	at org.apache.carbondata.core.scan.processor.AbstractDataBlockIterator.updateScanner(AbstractDataBlockIterator.java:124)
> 	... 29 more
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 0
> 	at org.apache.carbondata.core.util.BitSetGroup.getBitSet(BitSetGroup.java:40)
> 	at org.apache.carbondata.core.util.BitSetGroup.or(BitSetGroup.java:68)
> 	at org.apache.carbondata.core.scan.filter.executer.OrFilterExecuterImpl.applyFilter(OrFilterExecuterImpl.java:40)
> 	at org.apache.carbondata.core.scan.scanner.impl.FilterScanner.fillScannedResult(FilterScanner.java:147)
> 	at org.apache.carbondata.core.scan.scanner.impl.FilterScanner.scanBlocklet(FilterScanner.java:92)
> 	at org.apache.carbondata.core.scan.processor.AbstractDataBlockIterator$1.call(AbstractDataBlockIterator.java:189)
> 	at org.apache.carbondata.core.scan.processor.AbstractDataBlockIterator$1.call(AbstractDataBlockIterator.java:176)
> 	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 	... 3 more
> Driver stacktrace:
> But when we executed the same query manually, it worked fine.
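
A note on the delete predicate: in SQL, AND binds tighter than OR, so the condition above is evaluated as (doj='1970-01-15 02:00:03' and dob='1970-01-15 01:00:03') or INTEGER_COLUMN1=15, i.e. two filter branches combined by an OR, which is exactly the OrFilterExecuterImpl.applyFilter step where the trace fails.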
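
The root cause in the trace is the ArrayIndexOutOfBoundsException: 0 thrown from BitSetGroup.getBitSet while the OR filter combines the per-page bitsets of its two branches. Below is a minimal, hypothetical Java sketch (an illustration only, not the actual org.apache.carbondata.core.util.BitSetGroup) of how OR-ing two bitset groups of different sizes can fail in exactly this way:

import java.util.BitSet;

// Hypothetical simplification of a per-blocklet bitset group, one BitSet per page.
public class BitSetGroupSketch {

  private final BitSet[] bitSets;

  BitSetGroupSketch(int pageCount) {
    this.bitSets = new BitSet[pageCount];
  }

  void setBitSet(BitSet bitSet, int pageNumber) {
    bitSets[pageNumber] = bitSet;
  }

  BitSet getBitSet(int pageNumber) {
    // Throws ArrayIndexOutOfBoundsException when this group was built with
    // zero pages but is indexed at 0, matching the "0" in the trace above.
    return bitSets[pageNumber];
  }

  // OR this group with another, page by page, as an OR filter would.
  void or(BitSetGroupSketch other) {
    for (int i = 0; i < bitSets.length; i++) {
      BitSet otherBits = other.getBitSet(i); // fails if 'other' has fewer pages
      if (bitSets[i] != null && otherBits != null) {
        bitSets[i].or(otherBits);
      }
    }
  }

  public static void main(String[] args) {
    BitSetGroupSketch left = new BitSetGroupSketch(1);  // left OR branch: one page
    left.setBitSet(new BitSet(8), 0);
    BitSetGroupSketch right = new BitSetGroupSketch(0); // right OR branch: empty
    left.or(right); // java.lang.ArrayIndexOutOfBoundsException: 0
  }
}

Under this reading, the automation run may hand one OR branch an empty bitset group while the manual run does not, which would explain why the same query passes interactively; this is a hedged guess from the stack trace, not a confirmed diagnosis.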



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
