spark-issues mailing list archives

From "Alfredo Gimenez (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (SPARK-24295) Purge Structured streaming FileStreamSinkLog metadata compact file data.
Date Sat, 02 Mar 2019 20:19:00 GMT

    [ https://issues.apache.org/jira/browse/SPARK-24295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16782510#comment-16782510
] 

Alfredo Gimenez commented on SPARK-24295:
-----------------------------------------

Our current workaround, FWIW:

We've added a streaming query listener that, at every query progress event, writes out a
manual checkpoint (from the QueryProgressEvent's source offset fields, which contain the
last offsets committed for each source). We gracefully stop the streaming job every 6 hours,
purge the _spark_metadata and Spark checkpoint directories, and upon restart check for the
existence of the manual checkpoint and use it if available. We drive the stop/purge/restart
cycle via Airflow, but it would be trivial to do this by looping around a stream
awaitTermination with a provided timeout.
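
For illustration, a minimal sketch of that approach in Scala. The backup path and the
startQuery/purgeMetadata helpers are hypothetical stand-ins for our actual setup;
SourceProgress.endOffset is the JSON string Spark reports for each source's last
committed offset.

import java.nio.charset.StandardCharsets
import java.nio.file.{Files, Paths, StandardOpenOption}

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.streaming.StreamingQueryListener
import org.apache.spark.sql.streaming.StreamingQueryListener._

val spark = SparkSession.builder.appName("offset-backup").getOrCreate()

// Hypothetical location for the manual checkpoint (use a durable store in practice).
val offsetBackupPath = Paths.get("/tmp/manual-offsets.json")

spark.streams.addListener(new StreamingQueryListener {
  override def onQueryStarted(event: QueryStartedEvent): Unit = ()
  override def onQueryTerminated(event: QueryTerminatedEvent): Unit = ()

  // At every progress event, persist the offsets the last batch ended at.
  // endOffset is already a JSON string (e.g. Kafka topic/partition offsets).
  override def onQueryProgress(event: QueryProgressEvent): Unit = {
    val offsetsJson = event.progress.sources.map(_.endOffset).mkString("[", ",", "]")
    Files.write(offsetBackupPath, offsetsJson.getBytes(StandardCharsets.UTF_8),
      StandardOpenOption.CREATE, StandardOpenOption.TRUNCATE_EXISTING)
  }
})

// The in-process alternative to Airflow: loop around awaitTermination with a timeout.
while (true) {
  val query = startQuery(spark) // hypothetical helper: starts the sink, seeding the
                                // source's startingOffsets from the backup if present
  query.awaitTermination(6L * 60 * 60 * 1000) // run for ~6 hours
  query.stop()                                // graceful stop
  purgeMetadata()                             // hypothetical helper: delete _spark_metadata
                                              // and checkpoint dirs
}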

A simple solution would be an option to disable metadata file compaction that also allows
old metadata files to be deleted after a delay. Currently it appears that all files stay
around until compaction, at which point files older than the delay and not included in the
compaction are purged.
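
For reference, Spark does already have internal (undocumented, so subject to change)
configs that tune this compaction and cleanup behavior, though none of them skips
compaction entirely; from SQLConf in 2.3.x:

spark.conf.set("spark.sql.streaming.fileSink.log.compactInterval", "10") // batches between compactions
spark.conf.set("spark.sql.streaming.fileSink.log.deletion", "true")      // delete expired batch log files
spark.conf.set("spark.sql.streaming.fileSink.log.cleanupDelay", "10m")   // retention before deletion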

> Purge Structured streaming FileStreamSinkLog metadata compact file data.
> ------------------------------------------------------------------------
>
>                 Key: SPARK-24295
>                 URL: https://issues.apache.org/jira/browse/SPARK-24295
>             Project: Spark
>          Issue Type: Bug
>          Components: Structured Streaming
>    Affects Versions: 2.3.0
>            Reporter: Iqbal Singh
>            Priority: Major
>         Attachments: spark_metadatalog_compaction_perfbug_repro.tar.gz
>
>
> FileStreamSinkLog metadata logs are concatenated into a single compact file after a
> defined compact interval.
> For long-running jobs, the compact file size can grow to tens of GBs, causing slowness
> while reading the data from the FileStreamSinkLog dir, as Spark defaults to the
> "_spark_metadata" dir for the read.
> We need functionality to purge the compact file data.
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

