hadoop-hdfs-user mailing list archives

From Prabhu Joseph <prabhujose.ga...@gmail.com>
Subject Re: YARN Fair Scheduler
Date Tue, 23 Feb 2016 05:28:59 GMT
Hi Karthik,

   Yes, all the queues are always active (at least one job is running at a
time), and thus the fair share of each queue is very small. How should the
fair scheduler be designed for this kind of case? Do you have any best
practices for designing fair-scheduler.xml?

Are weights the correct way to make critical queues get a bigger share?
How does nesting of queues help? A few more doubts:

1. How should minResources of a queue be configured? Should the sum of
minResources across all queues equal the total YARN cluster resources?
2. What do we need to consider when configuring a YARN queue for Spark jobs?
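
As a sketch of how weights, nesting, and minResources fit together in
fair-scheduler.xml (the queue names and values below are illustrative
assumptions, not a recommendation for any particular cluster):

```xml
<?xml version="1.0"?>
<!-- Illustrative sketch only: queue names, weights, and resource
     amounts are hypothetical. -->
<allocations>
  <!-- A critical parent queue with 3x the weight of its sibling;
       its children subdivide the parent's share further. -->
  <queue name="critical">
    <weight>3.0</weight>
    <minResources>1024000 mb, 300 vcores</minResources>
    <queue name="etl"/>
    <queue name="reporting"/>
  </queue>
  <!-- Everything else competes at the default weight of 1. -->
  <queue name="batch">
    <weight>1.0</weight>
  </queue>
  <!-- Schedule within queues by fair share rather than FIFO. -->
  <defaultQueueSchedulingPolicy>fair</defaultQueueSchedulingPolicy>
</allocations>
```

With weights, a queue's instantaneous fair share is proportional to its
weight relative to the other active queues at the same level. minResources
is a soft guarantee; the Fair Scheduler does not require the minResources
of all queues to sum to the cluster total, and if they exceed it the
guarantees are scaled down proportionally.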

Prabhu Joseph

On Tue, Feb 23, 2016 at 10:35 AM, Karthik Kambatla <kasha@cloudera.com>
wrote:

> Hey Prabhu,
> Are all the 250 queues always active? If not, the actual (instantaneous)
> fair share used by the scheduler only considers the active queues (i.e.,
> those that have running applications). Otherwise, you can tune your queues
> (weights, nesting, etc.) so the critical queues get a bigger share.
> Hope that helps.
> On Mon, Feb 22, 2016 at 5:07 PM, Prabhu Joseph <prabhujose.gates@gmail.com>
> wrote:
> > Hi All,
> >
> >    When the YARN Fair Scheduler is configured with a parent root and 250
> > child queues on a big cluster with total resources of 10 TB and 3000
> > cores, the fair share of each child queue is very small: fair share is
> > the total cluster resources divided by the total number of child queues.
> > How do we design a Fair Scheduler with many queues (say, 250) in such a
> > way that each queue gets a larger fair share?
> >
> > Is it by nesting queues, configuring weights, or some other design?
> >
> > Thanks,
> > Prabhu Joseph
> >
