hadoop-user mailing list archives

From Matteo Luzzi <matteo.lu...@gmail.com>
Subject Problems in running spark on Yarn
Date Tue, 01 Sep 2015 22:25:52 GMT
Hi all!
I'm developing a system where I need to run Spark jobs over YARN. I'm using
a two-node cluster (one master and one slave) for testing and I'm
submitting the application through Oozie, but after the first application
(the Oozie launcher container) starts running, the other one remains in the
ACCEPTED state. I am new to YARN, so I am probably missing some concepts about
how containers are requested and assigned to applications. It seems that I
can execute only one container at a time, even though there are still
free resources. When I kill the first running application, the other one
moves to the RUNNING state. I'm also using the Fair Scheduler since, according
to the documentation, it should avoid starvation problems.
I don't know if this is a problem with Spark or with YARN. Please share any
suggestions you have.
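For what it's worth, here is a minimal sketch of the kind of fair-scheduler.xml I would expect for this setup, with the Oozie launcher and the Spark job it submits placed in separate queues so they don't compete for the same allocation. The queue names and weights are purely illustrative, not taken from my actual configuration:

```xml
<?xml version="1.0"?>
<!-- Hypothetical fair-scheduler.xml sketch. Queue names ("launcher",
     "spark") and the weight/maxAMShare values are illustrative
     assumptions, not a known-working configuration. -->
<allocations>
  <!-- Queue for the Oozie launcher container -->
  <queue name="launcher">
    <weight>1.0</weight>
    <!-- Cap the fraction of the queue usable by ApplicationMasters -->
    <maxAMShare>0.5</maxAMShare>
  </queue>
  <!-- Queue for the Spark application submitted by the launcher -->
  <queue name="spark">
    <weight>2.0</weight>
    <maxAMShare>0.5</maxAMShare>
  </queue>
  <defaultQueueSchedulingPolicy>fair</defaultQueueSchedulingPolicy>
</allocations>
```

The Spark job would then be pointed at its own queue (e.g. via spark.yarn.queue), so the launcher holding a container in one queue should not block the Spark ApplicationMaster from being scheduled in the other.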

