Date: Thu, 12 Nov 2015 15:47:11 +0000 (UTC)
From: "Yin Huai (JIRA)"
To: issues@spark.apache.org
Subject: [jira] [Commented] (SPARK-11661) We should still pushdown filters returned by a data source's unhandledFilters

    [ https://issues.apache.org/jira/browse/SPARK-11661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15002268#comment-15002268 ]

Yin Huai commented on SPARK-11661:
----------------------------------

It started to fail before my PR (see https://amplab.cs.berkeley.edu/jenkins/job/Spark-Master-Maven-with-YARN/4073/).

> We should still pushdown filters returned by a data source's unhandledFilters
> -----------------------------------------------------------------------------
>
>                 Key: SPARK-11661
>                 URL: https://issues.apache.org/jira/browse/SPARK-11661
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>            Reporter: Yin Huai
>            Assignee: Yin Huai
>            Priority: Blocker
>             Fix For: 1.6.0
>
>
> The unhandledFilters interface was added in SPARK-10978. It gives a data source a way to tell Spark SQL that some of the pushed filters may not be applied to every row, so Spark SQL must still evaluate those filters in a Filter operator. However, even when a filter appears in the returned unhandledFilters, we should still push it down to the data source. For example, our internal data sources do not override this method; if we did not push down those filters, we would effectively turn off the filter pushdown feature.
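
For context, here is a minimal sketch of what the unhandledFilters contract looks like against the Spark 1.6 sources API. This example is not part of the original report; the relation name ExampleRelation, its schema, and the choice to treat only EqualTo filters as fully handled are illustrative assumptions.

import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{Row, SQLContext}
import org.apache.spark.sql.sources.{BaseRelation, EqualTo, Filter, PrunedFilteredScan}
import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

// Hypothetical relation that fully handles only EqualTo filters; all other
// pushed filters are reported back via unhandledFilters so Spark SQL keeps
// a Filter operator on top to re-evaluate them.
class ExampleRelation(override val sqlContext: SQLContext)
    extends BaseRelation with PrunedFilteredScan {

  override def schema: StructType =
    StructType(StructField("id", IntegerType) :: StructField("name", StringType) :: Nil)

  // Filters NOT in the returned array are guaranteed by this data source to be
  // applied to every row. Filters in the returned array may still be pushed
  // down as a best-effort optimization, which is the behavior this issue asks for.
  override def unhandledFilters(filters: Array[Filter]): Array[Filter] =
    filters.filterNot(_.isInstanceOf[EqualTo])

  override def buildScan(requiredColumns: Array[String], filters: Array[Filter]): RDD[Row] = {
    // Both handled and unhandled filters should still arrive here, so the
    // source can apply them opportunistically while Spark SQL re-checks the
    // unhandled ones. Returning an empty RDD keeps the sketch self-contained.
    sqlContext.sparkContext.emptyRDD[Row]
  }
}

A data source that does not override unhandledFilters inherits the default of returning all filters as unhandled; the issue's point is that such filters must still be offered to buildScan, otherwise pushdown is silently disabled for those sources.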