Date: Thu, 10 Dec 2015 16:31:11 +0000 (UTC)
From: "Neelesh Srinivas Salian (JIRA)"
To: issues@spark.apache.org
Subject: [jira] [Commented] (SPARK-12265) Spark calls System.exit inside driver instead of throwing exception

    [ https://issues.apache.org/jira/browse/SPARK-12265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15051190#comment-15051190 ]

Neelesh Srinivas Salian commented on SPARK-12265:
-------------------------------------------------

[~srowen] [~dragos]
Would `SparkException` be a good wrapper for all such cases?
Also, I know YARN has YarnRuntimeException, which wraps runtime failures of this kind. Would it be a good suggestion to have a similar implementation in Spark (unless one is already present)? (Rough sketches of both ideas are appended below, after the quoted issue.)

> Spark calls System.exit inside driver instead of throwing exception
> -------------------------------------------------------------------
>
>                 Key: SPARK-12265
>                 URL: https://issues.apache.org/jira/browse/SPARK-12265
>             Project: Spark
>          Issue Type: Bug
>          Components: Mesos
>    Affects Versions: 1.6.0
>            Reporter: Iulian Dragos
>
> Spark may call {{System.exit}} if Mesos sends an error code back to the MesosSchedulerDriver. This makes Spark very hard to test, since it effectively kills the driver application under test. Such tests may run under ScalaTest, which never gets a chance to collect a result and populate a report.
> The relevant code is in MesosSchedulerUtils.scala:
> {code}
> val ret = mesosDriver.run()
> logInfo("driver.run() returned with code " + ret)
> if (ret != null && ret.equals(Status.DRIVER_ABORTED)) {
>   System.exit(1)
> }
> {code}
> Errors should be signaled with a {{SparkException}} in the correct thread.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org
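
A minimal sketch, not an actual Spark patch, of how the MesosSchedulerUtils snippet quoted above could signal the failure with a {{SparkException}} instead of calling {{System.exit(1)}}. The object and method names here are illustrative assumptions, not existing Spark API.

{code}
import org.apache.mesos.Protos.Status
import org.apache.mesos.SchedulerDriver
import org.apache.spark.SparkException

// Sketch only: surface an aborted Mesos scheduler driver as an exception on
// the calling thread instead of killing the whole JVM, so a test harness such
// as ScalaTest can catch the failure and report it.
object MesosDriverRunner {
  def runAndReport(mesosDriver: SchedulerDriver): Status = {
    val ret = mesosDriver.run()
    if (ret != null && ret.equals(Status.DRIVER_ABORTED)) {
      // Throwing keeps the driver application alive and lets the caller
      // decide how to handle the aborted scheduler driver.
      throw new SparkException("Mesos scheduler driver aborted with status: " + ret)
    }
    ret
  }
}
{code}

A caller that currently relies on the exit could wrap the call in a try/catch (or, in a test, {{intercept[SparkException]}}) and decide whether to shut down or simply fail the test.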
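On the second question, a sketch of what a YarnRuntimeException-style wrapper could look like in Spark, under the assumption that no such class exists yet; the name SparkRuntimeException is purely illustrative.

{code}
import org.apache.spark.SparkException

// Hypothetical sketch only: an unchecked wrapper analogous to Hadoop's
// YarnRuntimeException, for call sites that currently exit the JVM or cannot
// reasonably propagate a checked-style SparkException.
class SparkRuntimeException(message: String, cause: Throwable = null)
  extends RuntimeException(message, cause)

// Example use at a failure site:
//   throw new SparkRuntimeException("driver aborted", originalError)
{code}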