Date: Fri, 29 Jan 2016 20:52:39 +0000 (UTC)
From: "Apache Spark (JIRA)"
To: issues@spark.apache.org
Subject: [jira] [Commented] (SPARK-13082) sqlCtx.read.json() doesn't work with PythonRDD

    [ https://issues.apache.org/jira/browse/SPARK-13082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15124189#comment-15124189 ]

Apache Spark commented on SPARK-13082:
--------------------------------------

User 'zsxwing' has created a pull request for this issue:

https://github.com/apache/spark/pull/10988


> sqlCtx.read.json() doesn't work with PythonRDD
> ----------------------------------------------
>
>                 Key: SPARK-13082
>                 URL: https://issues.apache.org/jira/browse/SPARK-13082
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>    Affects Versions: 1.6.0
>         Environment: Tested on macosx 10.10 using Spark 1.6
>            Reporter: Gaëtan Lehmann
>
> This code works without problem:
> sqlCtx.read.json(sqlCtx.range(10).toJSON())
> but these ones fail with the traceback below:
> sqlCtx.read.json(sc.parallelize(['{"id":1}']*10))
> sqlCtx.read.json(sqlCtx.range(10).toJSON().pipe("cat"))
> sqlCtx.read.json(sqlCtx.range(10).toJSON().map(lambda x: x))
> ---------------------------------------------------------------------------
> Py4JJavaError                             Traceback (most recent call last)
> in ()
> ----> 1 sqlCtx.read.json(sqlCtx.range(10).toJSON().map(lambda x: x))
> /usr/local/Cellar/apache-spark/1.6.0/libexec/python/pyspark/sql/readwriter.pyc in json(self, path, schema)
>     178             return self._df(self._jreader.json(self._sqlContext._sc._jvm.PythonUtils.toSeq(path)))
>     179         elif isinstance(path, RDD):
> --> 180             return self._df(self._jreader.json(path._jrdd))
>     181         else:
>     182             raise TypeError("path can be only string or RDD")
> /usr/local/Cellar/apache-spark/1.6.0/libexec/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py in __call__(self, *args)
>     811         answer = self.gateway_client.send_command(command)
>     812         return_value = get_return_value(
> --> 813             answer, self.gateway_client, self.target_id, self.name)
>     814
>     815         for temp_arg in temp_args:
> /usr/local/Cellar/apache-spark/1.6.0/libexec/python/pyspark/sql/utils.pyc in deco(*a, **kw)
>      43     def deco(*a, **kw):
>      44         try:
> ---> 45             return f(*a, **kw)
>      46         except py4j.protocol.Py4JJavaError as e:
>      47             s = e.java_exception.toString()
> /usr/local/Cellar/apache-spark/1.6.0/libexec/python/lib/py4j-0.9-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
>     306                 raise Py4JJavaError(
>     307                     "An error occurred while calling {0}{1}{2}.\n".
> --> 308                     format(target_id, ".", name), value)
>     309             else:
>     310                 raise Py4JError(
> Py4JJavaError: An error occurred while calling o961.json.
> : org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 55.0 failed 1 times, most recent failure: Lost task 0.0 in stage 55.0 (TID 149, localhost): java.lang.ClassCastException: [B cannot be cast to java.lang.String
> 	at org.apache.spark.sql.execution.datasources.json.InferSchema$$anonfun$1$$anonfun$apply$1.apply(InferSchema.scala:53)
> 	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
> 	at scala.collection.Iterator$class.foreach(Iterator.scala:727)
> 	at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
> 	at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:144)
> 	at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1157)
> 	at scala.collection.TraversableOnce$class.aggregate(TraversableOnce.scala:201)
> 	at scala.collection.AbstractIterator.aggregate(Iterator.scala:1157)
> 	at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$23.apply(RDD.scala:1121)
> 	at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$23.apply(RDD.scala:1121)
> 	at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$24.apply(RDD.scala:1122)
> 	at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$24.apply(RDD.scala:1122)
> 	at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
> 	at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
> 	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
> 	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
> 	at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
> 	at org.apache.spark.scheduler.Task.run(Task.scala:89)
> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> 	at java.lang.Thread.run(Thread.java:745)
> Driver stacktrace:
> 	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431)
> 	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419)
> 	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418)
> 	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> 	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
> 	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418)
> 	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
> 	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
> 	at scala.Option.foreach(Option.scala:236)
> 	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)
> 	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640)
> 	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)
> 	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)
> 	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
> 	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
> 	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)
> 	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1952)
> 	at org.apache.spark.rdd.RDD$$anonfun$reduce$1.apply(RDD.scala:1025)
> 	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
> 	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
> 	at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
> 	at org.apache.spark.rdd.RDD.reduce(RDD.scala:1007)
> 	at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1.apply(RDD.scala:1136)
> 	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
> 	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
> 	at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
> 	at org.apache.spark.rdd.RDD.treeAggregate(RDD.scala:1113)
> 	at org.apache.spark.sql.execution.datasources.json.InferSchema$.infer(InferSchema.scala:65)
> 	at org.apache.spark.sql.execution.datasources.json.JSONRelation$$anonfun$4.apply(JSONRelation.scala:114)
> 	at org.apache.spark.sql.execution.datasources.json.JSONRelation$$anonfun$4.apply(JSONRelation.scala:109)
> 	at scala.Option.getOrElse(Option.scala:120)
> 	at org.apache.spark.sql.execution.datasources.json.JSONRelation.dataSchema$lzycompute(JSONRelation.scala:109)
> 	at org.apache.spark.sql.execution.datasources.json.JSONRelation.dataSchema(JSONRelation.scala:108)
> 	at org.apache.spark.sql.sources.HadoopFsRelation.schema$lzycompute(interfaces.scala:636)
> 	at org.apache.spark.sql.sources.HadoopFsRelation.schema(interfaces.scala:635)
> 	at org.apache.spark.sql.execution.datasources.LogicalRelation.<init>(LogicalRelation.scala:37)
> 	at org.apache.spark.sql.SQLContext.baseRelationToDataFrame(SQLContext.scala:442)
> 	at org.apache.spark.sql.DataFrameReader.json(DataFrameReader.scala:288)
> 	at org.apache.spark.sql.DataFrameReader.json(DataFrameReader.scala:275)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:497)
> 	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
> 	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
> 	at py4j.Gateway.invoke(Gateway.java:259)
> 	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
> 	at py4j.commands.CallCommand.execute(CallCommand.java:79)
> 	at py4j.GatewayConnection.run(GatewayConnection.java:209)
> 	at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.ClassCastException: [B cannot be cast to java.lang.String
> 	at org.apache.spark.sql.execution.datasources.json.InferSchema$$anonfun$1$$anonfun$apply$1.apply(InferSchema.scala:53)
> 	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
> 	at scala.collection.Iterator$class.foreach(Iterator.scala:727)
> 	at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
> 	at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:144)
> 	at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1157)
> 	at scala.collection.TraversableOnce$class.aggregate(TraversableOnce.scala:201)
> 	at scala.collection.AbstractIterator.aggregate(Iterator.scala:1157)
> 	at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$23.apply(RDD.scala:1121)
> 	at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$23.apply(RDD.scala:1121)
> 	at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$24.apply(RDD.scala:1122)
> 	at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$24.apply(RDD.scala:1122)
> 	at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
> 	at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
> 	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
> 	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
> 	at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
> 	at org.apache.spark.scheduler.Task.run(Task.scala:89)
> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> 	... 1 more
> This seems related to SPARK-9964
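
A possible workaround on Spark 1.6, not taken from the report or the linked pull request: SQLContext.jsonRDD (deprecated since 1.4 but still available in 1.6) converts each Python element to a string before the JVM reads the RDD, so it should avoid the "[B cannot be cast to java.lang.String" failure that read.json() hits when given a PythonRDD. A minimal sketch, assuming a plain local SparkContext; the application name, variable names, and sample data are illustrative, not from the report:

    from pyspark import SparkContext
    from pyspark.sql import SQLContext

    sc = SparkContext(appName="spark-13082-workaround")  # hypothetical app name
    sqlCtx = SQLContext(sc)

    # An RDD of JSON strings built on the Python side (a PythonRDD),
    # the same shape as the failing examples in the report.
    rdd = sc.parallelize(['{"id": 1}'] * 10)

    # sqlCtx.read.json(rdd) raises the ClassCastException shown above on 1.6.0.
    # jsonRDD encodes each element as a string before it reaches JSON schema
    # inference on the JVM, so the cast to java.lang.String should succeed.
    df = sqlCtx.jsonRDD(rdd)
    df.show()

The same approach should apply to the other failing examples (toJSON().pipe("cat") and toJSON().map(lambda x: x)), since each of them also yields a PythonRDD of JSON strings.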