hadoop-user mailing list archives

From Matt Goeke <goeke.matt...@gmail.com>
Subject Re: Fair Scheduler question: Fair share and its effect on max capacity
Date Thu, 08 Nov 2012 23:19:59 GMT
Looks like my phrasing was off :)

When I said it is never able to hit max capacity, I meant max capacity for
the pool (e.g. we never saw it take up the full 200 map slots, and even if
every job used only 1 mapper, the pool could never reach 200 concurrent jobs).

--
Matt
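For reference, the pool settings quoted below correspond to entries in the Fair Scheduler's allocations file (the XML file pointed to by mapred.fairscheduler.allocation.file in Hadoop 1.x / MR1). A sketch of what that pool definition might look like; the pool name "etl" is a placeholder, and the values are taken from the quoted configuration:

```xml
<?xml version="1.0"?>
<allocations>
  <!-- Hypothetical pool name; substitute your actual pool. -->
  <pool name="etl">
    <minMaps>2</minMaps>
    <minReduces>1</minReduces>
    <maxMaps>200</maxMaps>
    <maxReduces>66</maxReduces>
    <maxRunningJobs>200</maxRunningJobs>
    <!-- Seconds to wait before preempting to reclaim the min share. -->
    <minSharePreemptionTimeout>300</minSharePreemptionTimeout>
    <weight>4.0</weight>
  </pool>
</allocations>
```

Note that maxMaps caps concurrent map slots for the pool, while weight and the min* settings only shape how slots are shared when pools compete; none of them guarantees the pool will actually reach its cap.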


On Thu, Nov 8, 2012 at 5:18 PM, Nan Zhu <zhunansjtu@gmail.com> wrote:

>  You set maxMaps to 200,
>
> so the maximum number of concurrently running mappers should be no more than 200
>
> Best,
>
> --
> Nan Zhu
> School of Computer Science,
> McGill University
>
>
> On Thursday, 8 November, 2012 at 6:12 PM, Matt Goeke wrote:
>
> Pretty straightforward question, but can the fair share factor actually
> impact the total number of jobs / slots a pool can take up, even if it is
> the only pool with active jobs submitted?
>
> We currently have a pool that has this configuration:
> "minMaps": 2,
>
> "minReduces": 1,
> "maxMaps": 200,
> "maxReduces": 66,
> "maxRunningJobs": 200,
> "minSharePreemptionTimeout": 300,
> "weight": "4.0"
>
> The total cluster capacity is above 250 map slots, but we are finding that
> this pool is never able to hit its max capacity for maps OR jobs, even during
> load tests. I was about to bump the minMaps property, but I wanted to confirm
> that doing so could potentially help alleviate our issue.
>
> --
> Matt
>
>
>
