carbondata-issues mailing list archives

From "anubhav tarar (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (CARBONDATA-1773) [Streaming]carbon StreamWriter task fails with ClassCastException
Date Wed, 22 Nov 2017 10:06:00 GMT

    [ https://issues.apache.org/jira/browse/CARBONDATA-1773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16262232#comment-16262232
] 

anubhav tarar commented on CARBONDATA-1773:
-------------------------------------------

@babulal I am unable to reproduce this issue. Could you share the steps to reproduce it?

> [Streaming]carbon StreamWriter task fails with ClassCastException
> ------------------------------------------------------------------
>
>                 Key: CARBONDATA-1773
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-1773
>             Project: CarbonData
>          Issue Type: Bug
>    Affects Versions: 1.3.0
>            Reporter: Babulal
>         Attachments: streamingLog.log
>
>
> Run the below sequence of commands in the Spark shell (bin/spark-shell --jars /opt/carbon/carbondata_2.11-1.3.0-SNAPSHOT-shade-hadoop2.7.2.jar --master yarn-client --executor-memory 1G --executor-cores 2 --driver-memory 1G)
> // carbon is SparkSession with CarbonStateBuilder
> carbon.sql("create table stable (value String,count String) STORED BY 'carbondata' TBLPROPERTIES ('streaming' = 'true')")
> val lines = carbon.readStream.format("socket").option("host", "localhost").option("port", 9999).load()
>  val words = lines.as[String].flatMap(_.split(" ")) 
>  val wordCounts = words.groupBy("value").count()
> val carbonTable = CarbonEnv.getCarbonTable(Some("default"), "stable")(carbon)
>  val tablePath = CarbonStorePath.getCarbonTablePath(carbonTable.getAbsoluteTableIdentifier)
> val qry = wordCounts.writeStream.format("carbondata").outputMode("complete").trigger(ProcessingTime("1 seconds")).option("tablePath", tablePath.getPath).option("checkpointLocation", tablePath.getStreamingCheckpointDir).option("tableName", "stable").start()
> scala> qry.awaitTermination()
> Now, in another window, run the below command:
> root@master ~ # nc -lk 9999
> babu
> Check the Spark shell:
> [Stage 1:>                                                        (0 + 6) / 200]
> 17/11/19 17:59:57 WARN TaskSetManager: Lost task 2.0 in stage 1.0 (TID 3, slave1, executor 2): org.apache.carbondata.streaming.CarbonStreamException: Task failed while writing rows
>         at org.apache.spark.sql.execution.streaming.CarbonAppendableStreamSink$.writeDataFileTask(CarbonAppendableStreamSink.scala:286)
>         at org.apache.spark.sql.execution.streaming.CarbonAppendableStreamSink$$anonfun$writeDataFileJob$1$$anonfun$apply$mcV$sp$1.apply(CarbonAppendableStreamSink.scala:192)
>         at org.apache.spark.sql.execution.streaming.CarbonAppendableStreamSink$$anonfun$writeDataFileJob$1$$anonfun$apply$mcV$sp$1.apply(CarbonAppendableStreamSink.scala:191)
>         at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
>         at org.apache.spark.scheduler.Task.run(Task.scala:99)
>         at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>         at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.ClassCastException: cannot assign instance of scala.collection.immutable.List$SerializationProxy to field scala.collection.convert.Wrappers$SeqWrapper.underlying of type scala.collection.Seq in instance of scala.collection.convert.Wrappers$SeqWrapper
>         at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2133)
>         at java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1305)
>         at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2251)
>         at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2169)
>         at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2027)
>         at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
>         at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2245)
>         at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2169)
>         at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2027)
>         at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
>         at java.io.ObjectInputStream.readObject(ObjectInputStream.java:422)
>         at org.apache.carbondata.hadoop.util.ObjectSerializationUtil.convertStringToObject(ObjectSerializationUtil.java:99)
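The bottom frame shows the failure surfacing in ObjectSerializationUtil.convertStringToObject, i.e. while Java-deserializing an object that was shipped to the executor as a string. The sketch below illustrates that string-to-object pattern in plain Java; the method names mirror the class in the stack trace, but the signatures and the Base64 encoding are assumptions for illustration, not the real CarbonData API.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.Base64;

// Hypothetical sketch of a string<->object serialization helper.
public class ObjectSerializationSketch {

    // Java-serialize an object and encode the bytes as a Base64 string.
    static String convertObjectToString(Serializable obj) {
        try {
            ByteArrayOutputStream baos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(baos)) {
                oos.writeObject(obj);
            }
            return Base64.getEncoder().encodeToString(baos.toByteArray());
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    // Decode the Base64 string and Java-deserialize the object back.
    static Object convertStringToObject(String s) {
        byte[] bytes = Base64.getDecoder().decode(s);
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            // The ClassCastException in the stack trace fires at this step:
            // readObject() restores a scala.collection.convert.Wrappers$SeqWrapper
            // whose `underlying` field cannot be assigned when the reading side
            // resolves the Scala collection classes differently (for example, a
            // Scala version mismatch between driver and executor classpaths).
            return ois.readObject();
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }
}
```

In-process, the round trip succeeds; the failure mode in this report only appears when the serializing and deserializing JVMs load incompatible Scala collection classes, which is why checking the Scala/CarbonData jar versions on the executors is a reasonable first step.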



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
