From: zsxwing
To: reviews@spark.apache.org
Subject: [GitHub] spark pull request #15163: [SPARK-15698][SQL][STREAMING] Add the ability to ...
Date: Tue, 20 Sep 2016 17:34:10 +0000 (UTC)

Github user zsxwing commented on a diff in the pull request:

    https://github.com/apache/spark/pull/15163#discussion_r79668094

    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/FileStreamSinkLog.scala ---
    @@ -79,213 +76,47 @@ object SinkFileStatus {
      * When the reader uses `allFiles` to list all files, this method only returns the visible files
      * (drops the deleted files).
      */
    -class FileStreamSinkLog(sparkSession: SparkSession, path: String)
    -  extends HDFSMetadataLog[Array[SinkFileStatus]](sparkSession, path) {
    -
    -  import FileStreamSinkLog._
    +class FileStreamSinkLog(
    +    metadataLogVersion: String,
    +    sparkSession: SparkSession,
    +    path: String)
    +  extends CompactibleFileStreamLog[SinkFileStatus](metadataLogVersion, sparkSession, path) {

       private implicit val formats = Serialization.formats(NoTypeHints)

    -  /**
    -   * If we delete the old files after compaction at once, there is a race condition in S3: other
    -   * processes may see the old files are deleted but still cannot see the compaction file using
    -   * "list". The `allFiles` handles this by looking for the next compaction file directly, however,
    -   * a live lock may happen if the compaction happens too frequently: one processing keeps deleting
    -   * old files while another one keeps retrying. Setting a reasonable cleanup delay could avoid it.
    -   */
    -  private val fileCleanupDelayMs = sparkSession.conf.get(SQLConf.FILE_SINK_LOG_CLEANUP_DELAY)
    +  protected override val fileCleanupDelayMs =
    --- End diff --

    Just resolved conflicts for these 3 confs
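[Editor's note] The Scaladoc deleted in the hunk above captures the design constraint behind fileCleanupDelayMs: on an eventually consistent store such as S3, files superseded by a compaction must not be deleted immediately, or other readers may see the old files gone before the new compact file shows up in a listing. Below is a minimal, self-contained sketch of that delayed-cleanup idea, using only standard Hadoop FileSystem calls; the object and parameter names are hypothetical and this is not the PR's actual implementation:

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.{FileSystem, Path}

    object DelayedCleanupSketch {
      // Delete only files whose modification time is older than `cleanupDelayMs`,
      // so that laggy listings can still see the old files until the newer
      // compaction file has become visible to every reader.
      def deleteExpired(fs: FileSystem, logDir: Path, cleanupDelayMs: Long): Unit = {
        val deadline = System.currentTimeMillis() - cleanupDelayMs
        fs.listStatus(logDir)
          .filter(f => f.isFile && f.getModificationTime < deadline)
          .foreach(f => fs.delete(f.getPath, false))
      }
    }

    // Example usage (hypothetical path):
    //   val dir = new Path("/tmp/output/_spark_metadata")
    //   DelayedCleanupSketch.deleteExpired(dir.getFileSystem(new Configuration()), dir, 10 * 60 * 1000L)

Deferring deletion by a fixed delay also avoids the live-lock the deleted comment mentions: a process that compacts frequently never races another process's retry loop, because nothing is deleted inside the delay window.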
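[Editor's note] The "3 confs" the reviewer refers to are the file-sink log settings that the subclass now overrides (the diff shows the first, fileCleanupDelayMs, before it is cut off). As a sketch, here is how one might inspect them on a running session; the string keys are a best recollection of the Spark 2.x names and should be treated as assumptions rather than verified API:

    import org.apache.spark.sql.SparkSession

    object SinkLogConfSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .master("local[1]")
          .appName("sink-log-confs")
          .getOrCreate()
        // Assumed keys backing fileCleanupDelayMs, isDeletingExpiredLog and
        // compactInterval; spark.conf.get(key, default) never throws.
        Seq(
          "spark.sql.streaming.fileSink.log.cleanupDelay",
          "spark.sql.streaming.fileSink.log.deletion",
          "spark.sql.streaming.fileSink.log.compactInterval"
        ).foreach(k => println(s"$k = ${spark.conf.get(k, "<default>")}"))
        spark.stop()
      }
    }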