From: Ratika Prasad <rprasad@couponsinc.com>
To: Madhusudanan Kandasamy <madhusudanan@in.ibm.com>
Cc: "dev@spark.apache.org" <dev@spark.apache.org>
Subject: RE: Unable to run the spark application in standalone cluster mode
Date: Wed, 19 Aug 2015 16:03:49 +0000

Should this be done on the master or the slave node, or both?

 

From: Madhusudanan Kandasamy [mailto:madhusudanan@in.ibm.com]
Sent: Wednesday, August 19, 2015 9:31 PM
To: Ratika Prasad <rprasad@couponsinc.com>
Cc: dev@spark.apache.org
Subject: Re: Unable to run the spark application in standalone cluster mode

 

Try increasing the Spark worker memory in conf/spark-env.sh:

export SPARK_WORKER_MEMORY=2g

Thanks,
Madhu.
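
For reference, a minimal sketch of that change, assuming the default standalone scripts and layout (the 2g value is only an example and must fit within the node's physical memory):

    # conf/spark-env.sh -- sourced by the standalone daemons at startup.
    # SPARK_WORKER_MEMORY is read by the worker daemon, so it matters on the
    # nodes that run workers (the slaves); setting it everywhere is harmless.
    export SPARK_WORKER_MEMORY=2g

    # Restart the daemons from the master so the new value is picked up
    # (the standard scripts assume passwordless ssh to the slaves).
    sbin/stop-all.sh
    sbin/start-all.sh

Note that this only raises the memory a worker can offer; each application still requests executor memory separately, e.g. via --executor-memory on spark-submit or spark.executor.memory.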

From: Ratika Prasad <rprasad@couponsinc.com>
Sent: 08/19/2015 09:22 PM
To: "dev@spark.apache.org" <dev@spark.apache.org>
Cc:
Subject: Unable to run the spark application in standalone cluster mode

 


Hi,
 
We have a simple Spark application which runs fine when launched locally on the master node, as below:
 
./bin/spark-submit --class com.coupons.salestransactionprocessor.SalesTransactionDataPointCreation --master local sales-transaction-processor-0.0.1-SNAPSHOT-jar-with-dependencies.jar
 
However, when I try to run it in cluster mode [our Spark cluster has two nodes, one master and one slave, with executor memory of 512 MB], the application fails with the output below. Please provide some input as to why.
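
The exact cluster-mode command line is not quoted in this mail; against a standalone master it typically takes the form sketched below, where the master host name and the --executor-memory value are placeholders rather than values taken from the original:

    ./bin/spark-submit \
      --class com.coupons.salestransactionprocessor.SalesTransactionDataPointCreation \
      --master spark://<master-host>:7077 \
      --executor-memory 512m \
      sales-transaction-processor-0.0.1-SNAPSHOT-jar-with-dependencies.jar

The failing run produced the following output.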
 
15/08/19 15:37:52 INFO client.AppClient$ClientActor: Executor updated: app-20150819153234-0001/8 is now RUNNING
15/08/19 15:37:56 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
15/08/19 15:38:11 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
15/08/19 15:38:26 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
15/08/19 15:38:32 INFO client.AppClient$ClientActor: Executor updated: app-20150819153234-0001/8 is now EXITED (Command exited with code 1)
15/08/19 15:38:32 INFO cluster.SparkDeploySchedulerBackend: Executor app-20150819153234-0001/8 removed: Command exited with code 1
15/08/19 15:38:32 INFO client.AppClient$ClientActor: Executor added: app-20150819153234-0001/9 on worker-20150812111932-ip-172-28-161-173.us-west-2.compute.internal-50108 (ip-172-28-161-173.us-west-2.compute.internal:50108) with 1 cores
15/08/19 15:38:32 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20150819153234-0001/9 on hostPort ip-172-28-161-173.us-west-2.compute.internal:50108 with 1 cores, 512.0 MB RAM
15/08/19 15:38:32 INFO client.AppClient$ClientActor: Executor updated: app-20150819153234-0001/9 is now RUNNING
15/08/19 15:38:41 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
15/08/19 15:38:56 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
15/08/19 15:39:11 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
15/08/19 15:39:12 INFO client.AppClient$ClientActor: Executor updated: app-20150819153234-0001/9 is now EXITED (Command exited with code 1)
15/08/19 15:39:12 INFO cluster.SparkDeploySchedulerBackend: Executor app-20150819153234-0001/9 removed: Command exited with code 1
15/08/19 15:39:12 ERROR cluster.SparkDeploySchedulerBackend: Application has been killed. Reason: Master removed our application: FAILED
15/08/19 15:39:12 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
15/08/19 15:39:12 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/metrics/json,null}
15/08/19 15:39:12 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/stages/stage/kill,null}
15/08/19 15:39:12 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/,null}
15/08/19 15:39:12 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/static,null}
15/08/19 15:39:12 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/executors/json,null}
15/08/19 15:39:12 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/executors,null}
15/08/19 15:39:12 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/environment/json,null}
15/08/19 15:39:12 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/environment,null}
15/08/19 15:39:12 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/storage/rdd/json,null}
15/08/19 15:39:12 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/storage/rdd,null}
15/08/19 15:39:12 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/storage/json,null}
15/08/19 15:39:12 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/storage,null}
15/08/19 15:39:12 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/stages/pool/json,null}
15/08/19 15:39:12 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/stages/pool,null}
15/08/19 15:39:12 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/stages/stage/json,null}
15/08/19 15:39:12 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/stages/stage,null}
15/08/19 15:39:12 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/stages/json,null}
15/08/19 15:39:12 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/stages,null}
15/08/19 15:39:12 INFO scheduler.TaskSchedulerImpl: Cancelling stage 0
15/08/19 15:39:12 INFO scheduler.DAGScheduler: Failed to run count at SalesTransactionDataPointCreation.java:29
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Master removed our application: FAILED
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1185)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1174)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1173)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1173)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688)
        at scala.Option.foreach(Option.scala:236)
        at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:688)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1391)
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
        at akka.actor.ActorCell.invoke(ActorCell.scala:456)
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
        at akka.dispatch.Mailbox.run(Mailbox.scala:219)
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
15/08/19 15:39:12 WARN thread.QueuedThreadPool: 8 threads could not be stopped
15/08/19 15:39:12 INFO ui.SparkUI: Stopped Spark web UI at http://172.28.161.131:4040
15/08/19 15:39:12 INFO scheduler.DAGScheduler: Stopping DAGScheduler
15/08/19 15:39:12 INFO cluster.SparkDeploySchedulerBackend: Shutting down all executors
15/08/19 15:39:12 INFO cluster.SparkDeploySchedulerBackend: Asking each executor to shut down
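
Given the repeated "Initial job has not accepted any resources" warnings and the executors exiting with code 1, two checks usually narrow this down; the path and port below are the standalone defaults and may differ on this cluster:

    # 1. On the worker node, the executor's own stderr normally explains the exit code 1
    #    (standalone workers keep per-executor logs under their work directory).
    cat $SPARK_HOME/work/app-20150819153234-0001/9/stderr

    # 2. The master web UI (default port 8080) shows how much memory each worker
    #    registered with, versus what this application is requesting.
    curl http://<master-host>:8080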
 
Thanks
R
