spark-issues mailing list archives

From "Shubhanshu Mishra (JIRA)" <>
Subject [jira] [Commented] (SPARK-14103) Python DataFrame CSV load on large file is writing to console in Ipython
Date Mon, 28 Mar 2016 23:20:25 GMT


Shubhanshu Mishra commented on SPARK-14103:

[~srowen] I just checked the Spark code on GitHub and found that the line separator [referred to as rowSeparator] is hard-coded as "\n".

Ideally, the line separator should be taken from the platform-dependent setting: CRLF on Windows and LF on Unix-based systems. Also, when a user defines a custom rowSeparator, it should override the default.

This might be what is causing the issue. I can send a PR that accepts a user-defined line separator and defaults to the system-specific setting; a rough sketch is below.
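A minimal sketch of the idea, assuming the univocity CsvParserSettings that the CSV data source builds (the helper name and the option plumbing here are illustrative, not the actual Spark code):

{code}
import com.univocity.parsers.csv.{CsvParser, CsvParserSettings}

// Sketch only: take the row separator from a user-supplied option when one is given,
// otherwise fall back to the platform default instead of a hard-coded "\n".
def buildParserSettings(userRowSeparator: Option[String]): CsvParserSettings = {
  val settings = new CsvParserSettings()
  val rowSeparator = userRowSeparator.getOrElse(System.lineSeparator())
  settings.getFormat.setLineSeparator(rowSeparator)
  settings
}

// No user override: CRLF on Windows, LF on Unix-based systems.
val parser = new CsvParser(buildParserSettings(None))
{code}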

> Python DataFrame CSV load on large file is writing to console in Ipython
> ------------------------------------------------------------------------
>                 Key: SPARK-14103
>                 URL:
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>         Environment: Ubuntu, Python 2.7.11, Anaconda 2.5.0, Spark from Master branch
>            Reporter: Shubhanshu Mishra
>              Labels: csv, csvparser, dataframe, pyspark
> I am using Spark from the master branch, and when I run the following command on a large tab-separated file, the contents of the file are written to stderr:
> {code}
> df ="temp.txt", format="csv", header="false", inferSchema="true",
> {code}
> Here is a sample of output:
> {code}
> ^M[Stage 1:>                                                          (0 + 2) / 2]16/03/23 14:01:02 ERROR Executor: Exception in task 1.0 in stage 1.0 (TID 2)
> com.univocity.parsers.common.TextParsingException: Error processing input: Length of parsed input (1000001) exceeds the maximum number of characters defined in your parser settings (1000000). Identified line separator characters in the parsed content. This may be the cause of the error. The line separator in your parser settings is set to '\n'. Parsed content:
>         Privacy-shake",: a haptic interface for managing privacy settings in mobile location sharing applications       privacy shake a haptic interface for managing privacy settings in mobile location sharing applications  2010    2010/09/07              international conference on human computer interaction  interact                43331058        19371[\n]        3D4F6CA1       Between the Profiles: Another such Bias. Technology Acceptance Studies on Social Network Services       between the profiles another such bias technology acceptance studies on social network services 2015    2015/08/02      10.1007/978-3-319-21383-5_12    international conference on human-computer interaction  interact                43331058        19502[\n]
> .......
> .........
> web snippets    2008    2008/05/04      10.1007/978-3-642-01344-7_13    international conference on web information systems and technologies    webist          44F29802
> 06FA3FFA        Interactive 3D User Interfaces for Neuroanatomy Exploration     interactive 3d user interfaces for neuroanatomy exploration     2009                    internationa]
>         at com.univocity.parsers.common.AbstractParser.handleException(
>         at com.univocity.parsers.common.AbstractParser.parseNext(
>         at
>         at
>         at scala.collection.Iterator$class.foreach(Iterator.scala:742)
>         at org.apache.spark.sql.execution.datasources.csv.BulkCsvReader.foreach(CSVParser.scala:120)
>         at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:155)
>         at org.apache.spark.sql.execution.datasources.csv.BulkCsvReader.foldLeft(CSVParser.scala:120)
>         at scala.collection.TraversableOnce$class.aggregate(TraversableOnce.scala:212)
>         at org.apache.spark.sql.execution.datasources.csv.BulkCsvReader.aggregate(CSVParser.scala:120)
>         at org.apache.spark.rdd.RDD$$anonfun$aggregate$1$$anonfun$22.apply(RDD.scala:1058)
>         at org.apache.spark.rdd.RDD$$anonfun$aggregate$1$$anonfun$22.apply(RDD.scala:1058)
>         at org.apache.spark.SparkContext$$anonfun$35.apply(SparkContext.scala:1827)
>         at org.apache.spark.SparkContext$$anonfun$35.apply(SparkContext.scala:1827)
>         at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:69)
>         at
>         at org.apache.spark.executor.Executor$
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(
>         at java.util.concurrent.ThreadPoolExecutor$
>         at
> Caused by: java.lang.ArrayIndexOutOfBoundsException
> 16/03/23 14:01:03 ERROR TaskSetManager: Task 0 in stage 1.0 failed 1 times; aborting
> ^M[Stage 1:>                                                          (0 + 1) / 2]
> {code}
> For a small sample of the data (<10,000 lines) I do not get any error, but as soon as I go above roughly 100,000 lines, the error appears.
> I don't think Spark should ever write the actual data to stderr, as it makes the console output unreadable.
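
For reference, the failure mode above can be reproduced with univocity alone; this is a hypothetical standalone snippet (not taken from the ticket) showing that once maxCharsPerColumn is exceeded, the TextParsingException message embeds the parsed content, which is how raw file data ends up in the stderr log:

{code}
import java.io.StringReader
import com.univocity.parsers.common.TextParsingException
import com.univocity.parsers.csv.{CsvParser, CsvParserSettings}

// Hypothetical repro: a single field longer than maxCharsPerColumn, with no row
// separator in the input, makes the parser abort and echo the parsed content.
val settings = new CsvParserSettings()
settings.getFormat.setLineSeparator("\n")  // the hard-coded default discussed above
settings.setMaxCharsPerColumn(100)         // small limit so the repro stays tiny
val parser = new CsvParser(settings)
try {
  parser.parseAll(new StringReader("x" * 1000))  // one long field, never hits a '\n'
} catch {
  case e: TextParsingException => println(e.getMessage)  // message includes the parsed content
}
{code}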

This message was sent by Atlassian JIRA

