hadoop-mapreduce-user mailing list archives

From Miles Crawford <mil...@allenai.org>
Subject Re: Control rate of preemption?
Date Tue, 12 Apr 2016 20:58:10 GMT
In looking at the code I found two undocumented config properties:

yarn.scheduler.fair.preemptionInterval
yarn.scheduler.fair.waitTimeBeforeKill
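
They're set in yarn-site.xml like any other scheduler property. A quick
sketch, with illustrative values rather than the defaults (both appear
to be in milliseconds; check FairSchedulerConfiguration in your Hadoop
version to be sure):

  <property>
    <!-- How often the scheduler checks whether it needs to preempt. -->
    <name>yarn.scheduler.fair.preemptionInterval</name>
    <value>15000</value>
  </property>
  <property>
    <!-- Grace period between requesting resources back and forcibly
         killing containers. -->
    <name>yarn.scheduler.fair.waitTimeBeforeKill</name>
    <value>60000</value>
  </property>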

But these don't seem to be enough for me, since it appears the fair
scheduler will still preempt as many containers as it wants in a
single operation. I was hoping for something like:

yarn.scheduler.fair.maxContainersToPreemptPerInterval

So that I could smooth out the rebalance operation over a longer time...

-m

On Mon, Apr 11, 2016 at 9:24 AM, Miles Crawford <milesc@allenai.org> wrote:
>
> I'm using the YARN fair scheduler to allow a group of users to equally share
> a cluster for running Spark jobs.
>
> Works great, but when a large rebalance happens, Spark sometimes can't keep
> up, and the job fails.
>
> Is there any way to control the rate at which YARN preempts resources? I'd
> love to limit the killing of containers to a slower pace, so Spark has a
> chance to keep up.
>
> Thanks,
> -miles


