spark-issues mailing list archives

From "Iqbal Singh (JIRA)" <>
Subject [jira] [Commented] (SPARK-24295) Purge Structured streaming FileStreamSinkLog metadata compact file data.
Date Fri, 01 Mar 2019 17:05:00 GMT


Iqbal Singh commented on SPARK-24295:

Agree, [~kabhwan]. Purging the metadata is very tricky, since it can break reads for the downstream job. I think the best way to add a purge is:
 * Add a retention period based on the age of the FileSink data in the streaming job. We already have a delete flag in the code for the compact file; by default it can be set to false, and users can enable it based on their requirements (see the config sketch after this list).
 * Add functionality for downstream jobs to avoid using "_spark_metadata" when reading the old dataset (using the metadata by default), since we are not purging the output data but only the metadata log for the output. That is a bit risky too (see the glob-read sketch after this list).
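
For the first bullet, a minimal sketch of where such a knob could sit. The three "spark.sql.streaming.fileSink.log.*" settings below are existing internal Spark configs that already govern the sink log; the "retention" writer option is purely hypothetical here, illustrating the proposed flag rather than anything that exists today:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("file-sink-log-sketch")
      // Existing internal knobs: compact every 10 batches (the default),
      // delete expired per-batch log files, and wait 10 minutes before
      // cleanup so concurrent readers are not broken.
      .config("spark.sql.streaming.fileSink.log.compactInterval", "10")
      .config("spark.sql.streaming.fileSink.log.deletion", "true")
      .config("spark.sql.streaming.fileSink.log.cleanupDelay", "10m")
      .getOrCreate()

    val query = spark.readStream
      .format("rate") // toy source, just to drive micro-batches
      .load()
      .writeStream
      .format("parquet")
      .option("path", "/tmp/out")
      .option("checkpointLocation", "/tmp/out-ckpt")
      // Hypothetical: drop sink-log entries older than this age at
      // compaction time; off by default unless the user opts in.
      .option("retention", "30d")
      .start()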
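
For the second bullet, there is no flag to skip the sink metadata on read, but a common workaround today is to point the reader at a glob instead of the sink's root directory, so Spark never resolves "_spark_metadata". A sketch, assuming a parquet sink writing to /tmp/out:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().getOrCreate()

    // Default read: Spark finds /tmp/out/_spark_metadata and trusts it,
    // so files from partially committed batches are filtered out.
    val viaMetadata = spark.read.parquet("/tmp/out")

    // Glob read: the path is no longer the sink root, so the metadata
    // log is bypassed and every part file is read, including output
    // from partially processed batches. Exactly-once is lost.
    val rawFiles = spark.read.parquet("/tmp/out/part-*")

That trade-off is exactly the risk called out in the second bullet.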


-- We do not have a graceful kill for Structured Streaming jobs; whenever we need to stop a job, we kill it from the command line or the Resource Manager. That can cause issues if the job is in the middle of a batch, and we will get some partially processed data in the output directory. In such cases, reading from the "_spark_metadata" dir is required for the exactly-once guarantee; otherwise downstream will have duplicate data. (A graceful-stop sketch follows.)
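
A minimal sketch of one way to stop a query cleanly from the driver, using only public APIs (StreamingQuery.stop(), status.isTriggerActive, awaitTermination); the stop-marker file is just an illustrative convention, not a Spark feature:

    import java.nio.file.{Files, Paths}
    import org.apache.spark.sql.streaming.StreamingQuery

    // Poll for an external "stop" marker and stop the query between
    // micro-batches instead of killing the JVM mid-batch. Best effort:
    // a new trigger can still start between the check and stop().
    def awaitGracefulStop(query: StreamingQuery, marker: String): Unit = {
      while (query.isActive) {
        val stopRequested = Files.exists(Paths.get(marker))
        if (stopRequested && !query.status.isTriggerActive) {
          query.stop() // no batch in flight, safe to stop now
        } else {
          query.awaitTermination(10000L) // re-check every 10 seconds
        }
      }
    }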

Thanks for working on it. I will also look into the PR to understand it better.

--Iqbal Singh



> Purge Structured streaming FileStreamSinkLog metadata compact file data.
> ------------------------------------------------------------------------
>                 Key: SPARK-24295
>                 URL:
>             Project: Spark
>          Issue Type: Bug
>          Components: Structured Streaming
>    Affects Versions: 2.3.0
>            Reporter: Iqbal Singh
>            Priority: Major
>         Attachments: spark_metadatalog_compaction_perfbug_repro.tar.gz
> FileStreamSinkLog metadata logs are concatenated into a single compact file after a defined compact interval.
> For long-running jobs, the compact file can grow to tens of GBs, causing slowness while reading the data from the FileStreamSinkLog dir, since Spark defaults to the "_spark_metadata" dir for the read.
> We need functionality to purge the compact file data.
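
For readers skimming the quoted description, a sketch of the sink metadata layout with the default compact interval of 10; the file-naming convention is real, the sizes and counts are illustrative only:

    /tmp/out/_spark_metadata/
        0, 1, ..., 8     <- one small per-batch log file each
        9.compact        <- batches 0-9 folded into one file
        10, 11, ..., 18
        19.compact       <- batches 0-19; each compact file carries all
                            history, so it only ever grows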
