spark-issues mailing list archives

From "Abbi McClintic (JIRA)" <>
Subject [jira] [Commented] (SPARK-21938) Spark partial CSV write fails silently
Date Thu, 07 Sep 2017 05:50:00 GMT


Abbi McClintic commented on SPARK-21938:

Sure, makes sense. I'll send out a post to the mailing list and we can close this out for now.

> Spark partial CSV write fails silently
> --------------------------------------
>                 Key: SPARK-21938
>                 URL:
>             Project: Spark
>          Issue Type: Bug
>          Components: Java API, Spark Core
>    Affects Versions: 2.2.0
>         Environment: Amazon EMR 5.8, varying instance types
>            Reporter: Abbi McClintic
> Hello,
> My team has been experiencing a recurring, unpredictable bug in which the write to CSV in
> S3 completes only partially on one partition of our Dataset. For example, in a Dataset of
> 10 partitions written to CSV in S3, we might see 9 of the partitions at 2.8 GB in size but
> one of them at only 1.6 GB. However, the job does not exit with an error code.
> This becomes problematic in the following ways:
> 1. When we copy the data to Redshift, we get a bad decrypt error on the partial file,
> suggesting that the write stopped at an arbitrary byte offset in the file.
> 2. We lose data - sometimes as much as 10%.
> We don't see this problem with Parquet, which we also use, but moving all of our data to
> Parquet is not currently feasible. We're using the Java API (a sketch of the write pattern
> appears below).
> Any help on resolving this would be much appreciated.

This message was sent by Atlassian JIRA
