Date: Fri, 13 Mar 2015 21:56:38 +0000 (UTC)
From: "Tathagata Das (JIRA)"
To: issues@spark.apache.org
Subject: [jira] [Created] (SPARK-6331) New Spark Master URL is not picked up when streaming context is started from checkpoint

Tathagata Das created SPARK-6331:
------------------------------------

             Summary: New Spark Master URL is not picked up when streaming context is started from checkpoint
                 Key: SPARK-6331
                 URL: https://issues.apache.org/jira/browse/SPARK-6331
             Project: Spark
          Issue Type: Bug
          Components: Streaming
    Affects Versions: 1.2.1, 1.1.1, 1.3.0
            Reporter: Tathagata Das
            Assignee: Tathagata Das

When the SparkConf is reconstructed from the checkpointed configuration, it recovers the old master URL. This is fine if the cluster on which the streaming application is relaunched is the same cluster it was running on before. But if that cluster changes, there is no way to inject the new cluster's master URL, so the restarted application tries to connect to the non-existent old cluster and fails.

The solution is to check whether a master URL is set in the system properties (by spark-submit) before recreating the SparkConf. If a new master URL is set in the properties, use it, since it is obviously the most relevant one; otherwise, load the old one (to maintain existing behavior).
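A minimal sketch of the proposed check, not the actual patch: the method name reconstructConf and the checkpointedProps parameter are hypothetical, but the SparkConf calls used are the standard public API. The idea is to rebuild the SparkConf from the checkpointed key/value pairs and then let a master URL supplied at relaunch time (via the spark.master system property set by spark-submit) take precedence over the checkpointed one.

{code:scala}
import org.apache.spark.SparkConf

// Hypothetical helper illustrating the proposed behavior.
def reconstructConf(checkpointedProps: Seq[(String, String)]): SparkConf = {
  // loadDefaults = false: start from the checkpointed values only.
  val conf = new SparkConf(loadDefaults = false)
  checkpointedProps.foreach { case (key, value) => conf.set(key, value) }

  // If spark-submit set a new master URL for this relaunch, prefer it;
  // otherwise the checkpointed master is kept (existing behavior).
  sys.props.get("spark.master").foreach(master => conf.setMaster(master))

  conf
}
{code}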