Date: Mon, 22 Sep 2014 16:06:33 +0000 (UTC)
From: "Yin Huai (JIRA)"
To: issues@spark.apache.org
Subject: [jira] [Commented] (SPARK-3641) Correctly populate SparkPlan.currentContext

    [ https://issues.apache.org/jira/browse/SPARK-3641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14143347#comment-14143347 ]

Yin Huai commented on SPARK-3641:
---------------------------------

[~marmbrus] Can we populate SparkPlan.currentContext in the constructor of SQLContext instead of populating it every time before using ExistingRDD?

> Correctly populate SparkPlan.currentContext
> -------------------------------------------
>
>                 Key: SPARK-3641
>                 URL: https://issues.apache.org/jira/browse/SPARK-3641
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.1.0
>            Reporter: Yin Huai
>            Priority: Critical
>
> After creating a new SQLContext, we need to populate SparkPlan.currentContext before we create any SparkPlan. Right now, only SQLContext.createSchemaRDD populates SparkPlan.currentContext. SQLContext.applySchema is missing this call, and we can hit an NPE as described in http://qnalist.com/questions/5162981/spark-sql-1-1-0-npe-when-join-two-cached-table.
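For context, here is a minimal sketch in Scala of the thread-local pattern this ticket describes. The names SQLContext, SparkPlan.currentContext, createSchemaRDD, and applySchema come from the ticket; the method bodies and the buildPlan helper are illustrative assumptions, not Spark's actual source.

{code:scala}
// Hypothetical sketch of the "current context" thread-local pattern.
// Names mirror the ticket; bodies are simplified for illustration.

object SparkPlan {
  // Thread-local holding the SQLContext that new plans should bind to.
  val currentContext = new ThreadLocal[SQLContext]()
}

class SQLContext {
  // Proposed fix: populate the thread-local once, at construction time,
  // so every entry point sees it without per-call bookkeeping.
  SparkPlan.currentContext.set(this)

  def createSchemaRDD(): Unit = {
    // Before the fix, only this entry point set the thread-local:
    //   SparkPlan.currentContext.set(this)
    buildPlan()
  }

  def applySchema(): Unit = {
    // This entry point never set the thread-local, which is what
    // produced the NPE described in the issue.
    buildPlan()
  }

  private def buildPlan(): Unit = {
    // A new SparkPlan reads the thread-local; if no entry point
    // populated it, get() returns null and dereferencing it NPEs.
    val ctx = SparkPlan.currentContext.get()
    require(ctx != null, "SparkPlan.currentContext was not populated")
  }
}
{code}

With the set(this) call in the constructor, both createSchemaRDD and applySchema (and any future entry point) see a populated context, rather than relying on each caller to remember the per-call set.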