From: sameeragarwal
To: reviews@spark.apache.org
Reply-To: reviews@spark.apache.org
Subject: [GitHub] spark pull request #13566: [SPARK-15678] Add support to REFRESH data source paths
Date: Wed, 8 Jun 2016 19:06:00 +0000 (UTC)

GitHub user sameeragarwal opened a pull request:

    https://github.com/apache/spark/pull/13566

[SPARK-15678] Add support to REFRESH data source paths

## What changes were proposed in this pull request?

Spark currently continues to use cached data even after the underlying data has been overwritten. Current behavior:

```scala
val dir = "/tmp/test"
sqlContext.range(1000).write.mode("overwrite").parquet(dir)
val df = sqlContext.read.parquet(dir).cache()
df.count() // outputs 1000

sqlContext.range(10).write.mode("overwrite").parquet(dir)
sqlContext.read.parquet(dir).count() // outputs 1000 <---- We are still using the cached dataset
```

This patch fixes the bug by adding support for `REFRESH path`, which invalidates and refreshes all the cached data (and the associated metadata) for any DataFrame that reads from the given data source path. Expected behavior:

```scala
val dir = "/tmp/test"
sqlContext.range(1000).write.mode("overwrite").parquet(dir)
val df = sqlContext.read.parquet(dir).cache()
df.count() // outputs 1000

sqlContext.range(10).write.mode("overwrite").parquet(dir)
spark.catalog.refreshResource(dir)
sqlContext.read.parquet(dir).count() // outputs 10 <---- We are not using the cached dataset
```

## How was this patch tested?

Unit tests for overwrites and appends in `ParquetQuerySuite` and `CachedTableSuite`.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/sameeragarwal/spark refresh-path-2

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/13566.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #13566

----

commit ece34abd63176c6192ebe4ef05f2e8799ff52955
Author: Sameer Agarwal
Date:   2016-06-06T23:15:42Z

    [SPARK-15678] Add support to REFRESH data source paths

----
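----

For context on the mechanism, here is a minimal, self-contained sketch of path-based cache invalidation. This is an illustration, not the code in this patch: `CachedEntry`, `CacheRegistry`, and `refreshByPath` are hypothetical names, and the real change presumably hooks into Spark's internal cache manager and matches against the paths of cached plans. The core idea is to qualify paths (so that `/tmp/test` and `file:/tmp/test` compare equal) and then invalidate every cached entry whose source files sit at or under the refreshed path.

```scala
// Hypothetical sketch of path-prefix cache invalidation; CachedEntry and
// CacheRegistry are illustrative names, not Spark's internal classes.
import java.net.URI

// One cached dataset, tracked by the file paths it was read from.
final case class CachedEntry(sourcePaths: Seq[String], tag: String)

object CacheRegistry {
  private var entries: List[CachedEntry] = Nil

  def add(entry: CachedEntry): Unit = entries = entry :: entries

  // Qualify a path so that "/tmp/test" and "file:/tmp/test" compare equal.
  private def qualify(path: String): String = {
    val uri = new URI(path)
    if (uri.getScheme == null) "file:" + uri.getPath else uri.toString
  }

  // Invalidate every cached entry that reads from `path` or from a file under it.
  def refreshByPath(path: String): Unit = {
    val root = qualify(path).stripSuffix("/")
    val (stale, fresh) = entries.partition { entry =>
      entry.sourcePaths.map(qualify).exists(p => p == root || p.startsWith(root + "/"))
    }
    stale.foreach(e => println(s"invalidating: ${e.tag}"))
    entries = fresh // a real implementation would also re-plan and re-cache
  }
}

object Demo extends App {
  // The entry below is matched by prefix and therefore invalidated.
  CacheRegistry.add(CachedEntry(Seq("file:/tmp/test/part-00000.parquet"), "df over /tmp/test"))
  CacheRegistry.refreshByPath("/tmp/test") // prints: invalidating: df over /tmp/test
}
```

Note the sketch only captures the matching logic; in Spark the invalidated data must also be recached so that later reads see fresh results, which is what the `spark.catalog.refreshResource(dir)` call in the expected-behavior example above is meant to trigger.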