geode-issues mailing list archives

From "ASF subversion and git services (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (GEODE-194) Geode Spark Connector does not support Spark 2.0
Date Wed, 07 Jun 2017 21:27:18 GMT

    [ https://issues.apache.org/jira/browse/GEODE-194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16041648#comment-16041648 ]

ASF subversion and git services commented on GEODE-194:
-------------------------------------------------------

Commit b27a79ae91943a6ed1426f44dc4709a33eb671eb in geode's branch refs/heads/develop from [~amb]
[ https://git-wip-us.apache.org/repos/asf?p=geode.git;h=b27a79a ]

GEODE-194: Remove spark connector

Remove the spark connector code until it can be updated
for the current spark release. We should also integrate
the build lifecycle and consider how to extract this into
a separate repo.

This closes #558


> Geode Spark Connector does not support Spark 2.0
> ------------------------------------------------
>
>                 Key: GEODE-194
>                 URL: https://issues.apache.org/jira/browse/GEODE-194
>             Project: Geode
>          Issue Type: Bug
>          Components: extensions
>            Reporter: Jianxia Chen
>              Labels: experimental, gsoc2016
>
> The BasicIntegrationTest fails when using Spark 1.4, e.g.:
> [info] - GemFire OQL query with more complex UDT: Partitioned Region *** FAILED ***
> [info]   org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 24.0 failed 1 times, most recent failure: Lost task 0.0 in stage 24.0 (TID 48, localhost): scala.MatchError: 
> [info] 	Portfolio [id=3 status=active type=type3
> [info] 		AOL:Position [secId=AOL qty=978.0 mktValue=40.373], 
> [info] 		MSFT:Position [secId=MSFT qty=98327.0 mktValue=23.32]] (of class ittest.io.pivotal.gemfire.spark.connector.Portfolio)
> [info] 	at org.apache.spark.sql.catalyst.CatalystTypeConverters$$anonfun$createToCatalystConverter$4.apply(CatalystTypeConverters.scala:178)
> [info] 	at org.apache.spark.sql.execution.RDDConversions$$anonfun$rowToRowRdd$1$$anonfun$apply$2.apply(ExistingRDD.scala:62)
> [info] 	at org.apache.spark.sql.execution.RDDConversions$$anonfun$rowToRowRdd$1$$anonfun$apply$2.apply(ExistingRDD.scala:59)
> [info] 	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
> [info] 	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
> [info] 	at scala.collection.Iterator$class.foreach(Iterator.scala:727)
> [info] 	at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
> [info] 	at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
> [info] 	at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
> [info] 	at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
> [info] 	at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
> [info] 	at scala.collection.AbstractIterator.to(Iterator.scala:1157)
> [info] 	at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
> [info] 	at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
> [info] 	at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
> [info] 	at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
> [info] 	at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$12.apply(RDD.scala:885)
> [info] 	at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$12.apply(RDD.scala:885)
> [info] 	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1765)
> [info] 	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1765)
> [info] 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63)
> [info] 	at org.apache.spark.scheduler.Task.run(Task.scala:70)
> [info] 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
> [info] 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> [info] 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> [info] 	at java.lang.Thread.run(Thread.java:745)
> [info] 
> [info] Driver stacktrace:
> [info]   at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1266)
> [info]   at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1257)
> [info]   at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1256)
> [info]   at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> [info]   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
> [info]   at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1256)
> [info]   at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
> [info]   at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
> [info]   at scala.Option.foreach(Option.scala:236)
> [info]   at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:730)
> [info]   ...
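
For context, the scala.MatchError in the trace above comes from Spark SQL's CatalystTypeConverters receiving a plain domain object (the test's Portfolio class) where it expects a Row. The sketch below is not the connector's actual code path and its Portfolio class is a simplified stand-in, but against Spark 1.4-era APIs an RDD of non-Row objects smuggled into createDataFrame should fail with the same kind of MatchError once the job runs:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.rdd.RDD
    import org.apache.spark.sql.{Row, SQLContext}
    import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

    // Simplified stand-in for ittest.io.pivotal.gemfire.spark.connector.Portfolio;
    // it is neither a Row nor a Product, so Catalyst's converter cannot match it.
    class Portfolio(val id: Int, val status: String) extends Serializable

    object MatchErrorSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("sketch").setMaster("local[1]"))
        val sqlContext = new SQLContext(sc)

        val schema = StructType(Seq(
          StructField("id", IntegerType),
          StructField("status", StringType)))

        // The erased cast compiles and is a no-op at runtime, so the DataFrame
        // is created, but its partitions actually hold Portfolio objects.
        val bogusRows = sc.parallelize(Seq(new Portfolio(3, "active")))
          .asInstanceOf[RDD[Row]]

        val df = sqlContext.createDataFrame(bogusRows, schema)

        // Forcing evaluation applies the Catalyst converter to each element and
        // aborts the stage with scala.MatchError, much like the failure above.
        df.collect()
      }
    }
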



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
