spark-issues mailing list archives

From "Davies Liu (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (SPARK-5896) toDF in python doesn't work with Strings
Date Wed, 18 Feb 2015 21:35:11 GMT

    [ https://issues.apache.org/jira/browse/SPARK-5896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14326589#comment-14326589 ]

Davies Liu commented on SPARK-5896:
-----------------------------------

[~marmbrus] There is a mistake in your script; it should be:
{code}
data = sc.parallelize([Row(name="michael")]).toDF()
{code}
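
As a minimal sketch (assuming the same {{sc}} SparkContext and SQLContext setup as in the report), wrapping the Row in a list makes parallelize() produce an RDD of Row objects rather than an RDD of the Row's string fields, so schema inference works:

{code}
from pyspark.sql import Row

# The Row is wrapped in a list, so parallelize() sees one Row element
# and schema inference is handed a Row instead of a bare string.
data = sc.parallelize([Row(name="michael")]).toDF()
data.collect()
# [Row(name=u'michael')]
{code}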

> toDF in python doesn't work with Strings
> ----------------------------------------
>
>                 Key: SPARK-5896
>                 URL: https://issues.apache.org/jira/browse/SPARK-5896
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>            Reporter: Michael Armbrust
>            Assignee: Davies Liu
>
> {code}
> from pyspark.sql import Row
> data = sc.parallelize(Row(name="michael")).toDF()
> data.collect()
> {code}
> {code}
> ---------------------------------------------------------------------------
> ValueError                                Traceback (most recent call last)
> <ipython-input-7-6f86e500a07e> in <module>()
>       1 from pyspark.sql import Row
> ----> 2 data = sc.parallelize(Row(name="michael")).toDF()
>       3 data.collect()
> /home/ubuntu/databricks/spark/python/pyspark/sql/context.pyc in toDF(self, schema, sampleRatio)
>      53         [Row(name=u'Alice', age=1)]
>      54         """
> ---> 55         return sqlCtx.createDataFrame(self, schema, sampleRatio)
>      56 
>      57     RDD.toDF = toDF
> /home/ubuntu/databricks/spark/python/pyspark/sql/context.pyc in createDataFrame(self, data, schema, samplingRatio)
>     395 
>     396         if schema is None:
> --> 397             return self.inferSchema(data, samplingRatio)
>     398 
>     399         if isinstance(schema, (list, tuple)):
> /home/ubuntu/databricks/spark/python/pyspark/sql/context.pyc in inferSchema(self, rdd, samplingRatio)
>     228             raise TypeError("Cannot apply schema to DataFrame")
>     229 
> --> 230         schema = self._inferSchema(rdd, samplingRatio)
>     231         converter = _create_converter(schema)
>     232         rdd = rdd.map(converter)
> /home/ubuntu/databricks/spark/python/pyspark/sql/context.pyc in _inferSchema(self, rdd, samplingRatio)
>     158 
>     159         if samplingRatio is None:
> --> 160             schema = _infer_schema(first)
>     161             if _has_nulltype(schema):
>     162                 for row in rdd.take(100)[1:]:
> /home/ubuntu/databricks/spark/python/pyspark/sql/types.pyc in _infer_schema(row)
>     652 
>     653     else:
> --> 654         raise ValueError("Can not infer schema for type: %s" % type(row))
>     655 
>     656     fields = [StructField(k, _infer_type(v), True) for k, v in items]
> ValueError: Can not infer schema for type: <type 'str'>
> {code}
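
The traceback arises because Row subclasses tuple, so passing a single Row to parallelize() iterates over its field values. A short illustration (a sketch only, assuming the same {{sc}} as above):

{code}
from pyspark.sql import Row

r = Row(name="michael")

# Row is a tuple subclass, so iterating it yields its field values:
list(r)                      # ['michael']

# parallelize(r) therefore builds an RDD of plain strings, and
# _infer_schema() receives a str, raising the ValueError shown above.
sc.parallelize(r).first()    # 'michael'
{code}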



