hadoop-common-user mailing list archives

From Sandy Ryza <sandy.r...@cloudera.com>
Subject Re: Containers and CPU
Date Tue, 02 Jul 2013 17:56:26 GMT
CPU limits are only enforced if cgroups is turned on.  With cgroups on,
tasks are only throttled when there is contention, in which case they are
given CPU time in proportion to the number of cores requested for/allocated
to them.  Does that make sense?

-Sandy
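
For reference, a minimal sketch of the yarn-site.xml settings that turn on
the cgroups enforcement Sandy describes (assuming Hadoop 2.x with the
LinuxContainerExecutor; the hierarchy value is illustrative):

  <property>
    <name>yarn.nodemanager.container-executor.class</name>
    <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
  </property>
  <property>
    <name>yarn.nodemanager.linux-container-executor.resources-handler.class</name>
    <value>org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler</value>
  </property>
  <property>
    <!-- cgroups hierarchy under which the NodeManager creates per-container groups -->
    <name>yarn.nodemanager.linux-container-executor.cgroups.hierarchy</name>
    <value>/hadoop-yarn</value>
  </property>

With this handler in place, each container's CPU share is derived from its
allocated vcores, which is why throttling only kicks in under contention.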


On Tue, Jul 2, 2013 at 9:50 AM, Chuan Liu <chuanliu@microsoft.com> wrote:

>  I believe this is the default behavior.
>
> By default, only the memory limit on resources is enforced.
>
> The capacity scheduler uses DefaultResourceCalculator by default to
> compute resource allocation for containers, which also does not take CPU
> into account.
>
> -Chuan
>
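The calculator Chuan mentions is configured in capacity-scheduler.xml. As a
minimal sketch (class names assume Hadoop 2.x), the default is
DefaultResourceCalculator; switching to DominantResourceCalculator makes the
scheduler account for vcores as well:

  <property>
    <name>yarn.scheduler.capacity.resource-calculator</name>
    <!-- Default: scheduling decisions consider memory only. -->
    <value>org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator</value>
    <!-- To factor CPU (vcores) into scheduling, use instead:
         org.apache.hadoop.yarn.util.resource.DominantResourceCalculator -->
  </property>
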
>
> From: John Lilley [mailto:john.lilley@redpoint.net]
> Sent: Tuesday, July 02, 2013 8:57 AM
> To: user@hadoop.apache.org
> Subject: Containers and CPU
>
>
> I have YARN tasks that benefit from multicore scaling.  However, they
> don’t *always* use more than one core.  I would like to allocate
> containers based only on memory, and let each task use as many cores as
> needed, without allocating exclusive CPU “slots” in the scheduler.  For
> example, on an 8-core node with 16GB memory, I’d like to be able to run 3
> tasks each consuming 4GB memory and each using as much CPU as they like.
> Is this the default behavior if I don’t specify CPU restrictions to the
> scheduler?
>
> Thanks
>
> John
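
To make John's sizing example concrete, a container request can leave the
CPU ask at the 1-vcore minimum so that only memory drives placement. A
minimal sketch (assuming the Hadoop 2.x AMRMClient API; "amrmClient" is a
hypothetical, already-started client):

  import org.apache.hadoop.yarn.api.records.Priority;
  import org.apache.hadoop.yarn.api.records.Resource;
  import org.apache.hadoop.yarn.client.api.AMRMClient;

  // Request a 4 GB container; vcores stays at 1, so under
  // DefaultResourceCalculator scheduling is effectively memory-only.
  Resource capability = Resource.newInstance(4096, 1);
  Priority priority = Priority.newInstance(0);
  AMRMClient.ContainerRequest request =
      new AMRMClient.ContainerRequest(capability, null, null, priority);
  amrmClient.addContainerRequest(request);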
>
