spark-user mailing list archives

From Frank Austin Nothaft <fnoth...@berkeley.edu>
Subject Remote client shutdown error
Date Mon, 09 Dec 2013 22:01:23 GMT
Hi all,

I am getting a remote client shutdown error when running my application using spark-0.8.0-incubating
on EC2; Jey and I have been looking at this most of today. My cluster is set up using the
spark-ec2 scripts. We are running two applications, one of which binds successfully to the Spark
master and runs. The second application does not bind to the master and throws the following
error (its setup is sketched below):

2013-12-09 21:40:09 ERROR Client$ClientActor:64 - Connection to master failed; stopping client
2013-12-09 21:40:09 ERROR SparkDeploySchedulerBackend:64 - Disconnected from Spark cluster!
2013-12-09 21:40:09 ERROR ClusterScheduler:64 - Exiting due to error from cluster scheduler:
Disconnected from Spark cluster
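For reference, the failing application sets up its SparkContext roughly as follows (the
application name, jar path, and master hostname here are placeholders; the real master URL
comes from the spark-ec2 launch):

import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._

object SecondApp {
  def main(args: Array[String]) {
    // Placeholder master URL; the real one is the spark-ec2 master's
    // public hostname on the standalone port (7077 by default).
    val masterUrl = "spark://ec2-master-hostname:7077"

    // Spark 0.8.0 constructor: master URL, app name, Spark home, app jars.
    val sc = new SparkContext(
      masterUrl,
      "SecondApp",
      System.getenv("SPARK_HOME"),
      Seq("target/second-app.jar"))

    // Trivial job just to confirm the executors are reachable.
    println(sc.parallelize(1 to 100).count())

    sc.stop()
  }
}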

Even after this failure, I am still able to run the first application again. In the failing
application, we set logging to debug (roughly as shown below) but get no additional information
about the cause of the error. We also tried setting the Spark master's logging level to debug,
but do not see any additional output there either.
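Concretely, the application-side logging change was along these lines (log4j, set before
creating the SparkContext; the logger names are our guesses at the relevant packages):

import org.apache.log4j.{Level, Logger}

// Turn up logging for the deploy client and the cluster scheduler backend;
// these logger names are guesses at where the relevant messages originate.
Logger.getLogger("org.apache.spark.deploy").setLevel(Level.DEBUG)
Logger.getLogger("org.apache.spark.scheduler.cluster").setLevel(Level.DEBUG)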

Through jdb, we traced this to org.apache.spark.deploy.client.Client$ClientActor.receive,
where the RemoteClientShutdown case matches and markDisconnected is called (paraphrased below).
However, we have not been able to debug any further. Any advice would be greatly appreciated.
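To spell out the chain of events as we understand it, here is a paraphrase in plain Scala
(stand-in types and a placeholder master address so it compiles on its own; this is our
reading of the jdb trace, not the actual Spark source):

// Stand-in for the Akka remote lifecycle event the client actor receives
// when the connection to the standalone master goes away.
case class RemoteClientShutdown(remoteAddress: String)

object ClientActorParaphrase {
  val masterAddress = "spark://ec2-master-hostname:7077"  // placeholder

  // Our reading of what jdb shows inside Client$ClientActor.receive:
  // the shutdown event for the master's address matches this case, the
  // error is logged, and markDisconnected() is called, after which the
  // scheduler backend reports the disconnect and exits.
  def receive: PartialFunction[Any, Unit] = {
    case RemoteClientShutdown(address) if address == masterAddress =>
      println("Connection to master failed; stopping client")
      markDisconnected()
  }

  def markDisconnected(): Unit =
    println("Disconnected from Spark cluster!")
}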

Regards,

Frank Austin Nothaft
fnothaft@berkeley.edu
fnothaft@eecs.berkeley.edu
202-340-0466


