From: ericl
To: reviews@spark.apache.org
Reply-To: reviews@spark.apache.org
Subject: [GitHub] spark pull request #16135: [SPARK-18700][SQL] Add StripedLock for each table...
Message-Id: <20161211072248.DB7AEE040F@git1-us-west.apache.org>
Date: Sun, 11 Dec 2016 07:22:48 +0000 (UTC)

Github user ericl commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16135#discussion_r91849698

    --- Diff: sql/hive/src/test/scala/org/apache/spark/sql/hive/PartitionedTablePerfStatsSuite.scala ---
    @@ -352,4 +353,28 @@ class PartitionedTablePerfStatsSuite
           }
         }
       }
    +
    +  test("SPARK-18700: add lock for each table's relation in cache") {
    +    withTable("test") {
    +      withTempDir { dir =>
    +        HiveCatalogMetrics.reset()
    +        setupPartitionedHiveTable("test", dir)
    +        // query the table from multiple threads
    +        val executorPool = Executors.newFixedThreadPool(10)
    +        (1 to 10).map(threadId => {
    +          val runnable = new Runnable {
    +            override def run(): Unit = {
    +              spark.sql("select * from test where partCol1 = 999").count()
    +            }
    +          }
    +          executorPool.execute(runnable)
    +          None
    +        })
    +        executorPool.shutdown()
    +        executorPool.awaitTermination(30, TimeUnit.SECONDS)
    +        // check the cache hits: the relation should be loaded only once
    +        assert(HiveCatalogMetrics.METRIC_DATASOUCE_TABLE_CACHE_HITS.getCount() == 9)
    --- End diff --

    Yeah, that's fine; as long as it fails some fraction of the time, it will eventually show up as a flaky test.
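
---

For context, below is a minimal sketch of the per-table striped-lock pattern this test exercises: concurrent lookups for the same table serialize on one lock stripe, so the relation is loaded once and every later caller hits the cache (with 10 threads, that is 1 load plus 9 hits, which matches the assertion above). This is illustrative only, not the PR's actual implementation; the object name, the stripe count, and the loadRelation helper are assumptions.

    // Sketch only: a per-table striped lock guarding a relation cache.
    // Guava's Striped hands out one Lock per stripe; a given table name
    // always maps to the same stripe, so lookups for that table serialize,
    // while lookups for unrelated tables rarely contend.
    import java.util.concurrent.ConcurrentHashMap
    import com.google.common.util.concurrent.Striped

    object RelationCacheSketch {
      private val tableLocks = Striped.lazyWeakLock(64) // stripe count is an assumption
      private val cache = new ConcurrentHashMap[String, String]()

      def getOrLoad(table: String)(loadRelation: String => String): String = {
        val lock = tableLocks.get(table)
        lock.lock()
        try {
          // Only the first thread to acquire the stripe performs the load;
          // every later thread finds the entry already cached.
          Option(cache.get(table)).getOrElse {
            val relation = loadRelation(table)
            cache.put(table, relation)
            relation
          }
        } finally {
          lock.unlock()
        }
      }
    }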