spark-user mailing list archives

From Prashant Sharma <scrapco...@gmail.com>
Subject Re: problems with launching executor in standalone cluster
Date Thu, 21 Nov 2013 13:48:20 GMT
You might want to check the stderr and stdout files under the work directory (where
standalone mode puts the executor logs).
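As a sketch, something like the following can surface the failure reason; the `SPARK_HOME` default and the app/executor directory layout shown are assumptions based on standalone mode's usual `work/<app-id>/<executor-id>/` structure, so adjust the paths to your installation:

```shell
# Dump the tail of every executor's stderr under the worker's work directory.
# SPARK_HOME here is an assumption; point it at your actual Spark install.
SPARK_HOME=${SPARK_HOME:-/opt/spark}
for log in "$SPARK_HOME"/work/app-*/*/stderr; do
  [ -f "$log" ] || continue   # skip if no executors have run yet
  echo "== $log =="
  tail -n 20 "$log"           # the exception that killed the executor is usually here
done
```

The stderr of the failed executor (e.g. `work/app-20131121063231-0000/5/stderr`) typically contains the actual exception behind "Command exited with code 1".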


On Thu, Nov 21, 2013 at 7:07 PM, Umar Javed <umarj.javed@gmail.com> wrote:

> I have a really simple standalone cluster with one worker located on the
> same machine as the master. Both master and worker launch fine with the
> scripts provided in /conf. However, when I run the Spark shell with the
> command MASTER=.... ./spark-shell, my executors fail to launch. Here's a
> section of the log output:
>
> 13/11/21 06:32:34 INFO SparkDeploySchedulerBackend: Executor
> app-20131121063231-0000/4 removed: Command exited with code 1
> 13/11/21 06:32:34 INFO Client$ClientActor: Executor added:
> app-20131121063231-0000/5 on worker-20131121063035-node0-link0-52768
> (node0-link0:7077) with 2 cores
> 13/11/21 06:32:34 INFO SparkDeploySchedulerBackend: Granted executor ID
> app-20131121063231-0000/5 on hostPort node0-link0:7077 with 2 cores, 512.0
> MB RAM
> 13/11/21 06:32:34 INFO Client$ClientActor: Executor updated:
> app-20131121063231-0000/5 is now RUNNING
> 13/11/21 06:32:34 INFO Client$ClientActor: Executor updated:
> app-20131121063231-0000/5 is now FAILED (Command exited with code 1)
> 13/11/21 06:32:34 INFO SparkDeploySchedulerBackend: Executor
> app-20131121063231-0000/5 removed: Command exited with code 1
>
>
> Basically, every executor on the worker fails as soon as it is
> launched.
> Does anybody have a solution?
>
> thanks!
> Umar
>



