spark-user mailing list archives

From Corey Nolet <cjno...@gmail.com>
Subject Re: yarn-cluster spark-submit process not dying
Date Thu, 28 May 2015 20:12:14 GMT
Thanks Sandy - I was digging through the code in deploy.yarn.Client and had
literally found that property right before I saw your reply. I'm on 1.2.x
right now, which doesn't have the property. I guess I need to update sooner
rather than later.
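
For anyone who lands on this thread later: once on a release that includes
the property, submission looks something like the sketch below. Only
spark.yarn.submit.waitAppCompletion comes from this thread; the --class and
jar names are placeholders.

    # With waitAppCompletion=false, the launcher JVM exits as soon as the
    # application is submitted to YARN instead of looping on status reports.
    spark-submit \
      --master yarn-cluster \
      --class com.example.MyApp \
      --conf spark.yarn.submit.waitAppCompletion=false \
      my-app.jar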

On Thu, May 28, 2015 at 3:56 PM, Sandy Ryza <sandy.ryza@cloudera.com> wrote:

> Hi Corey,
>
> As of this PR, https://github.com/apache/spark/pull/5297/files, this can
> be controlled with the spark.yarn.submit.waitAppCompletion property.
>
> -Sandy
>
> On Thu, May 28, 2015 at 11:48 AM, Corey Nolet <cjnolet@gmail.com> wrote:
>
>> I am submitting jobs to my YARN cluster in yarn-cluster mode, and I'm
>> noticing that the JVM that fires up to allocate the resources, etc., is
>> not going away after the application master and executors have been
>> allocated. Instead, it just sits there printing one-second status updates
>> to the console. If I kill it, my job still runs (as expected).
>>
>> Is there an intended way to stop this from happening and just have the
>> local JVM die once it's done allocating the resources and deploying the
>> application master?
>>
>
>
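
A side note on the once-per-second console updates described in the quoted
question: the launcher JVM sits in a monitoring loop, polling YARN for the
application report at a fixed interval. On the 1.x line that interval is, if
memory serves, spark.yarn.report.interval (in milliseconds, default 1000),
so the polling can at least be slowed down on versions that predate
spark.yarn.submit.waitAppCompletion. A minimal sketch, reusing the
placeholder names from above:

    # spark.yarn.report.interval (assumed name; verify against your release)
    # paces the client's status-polling loop; 5000 ms prints an update every
    # five seconds instead of every second.
    spark-submit \
      --master yarn-cluster \
      --class com.example.MyApp \
      --conf spark.yarn.report.interval=5000 \
      my-app.jar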
