hadoop-hdfs-user mailing list archives

From: Chuan Liu <chuan...@microsoft.com>
Subject: RE: Containers and CPU
Date: Tue, 02 Jul 2013 16:50:37 GMT
I believe this is the default behavior.
By default, only the memory limit on container resources is enforced.
The capacity scheduler also defaults to DefaultResourceCalculator when computing resource
allocation for containers, and that calculator does not take CPU into account.
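For reference, the calculator is selected in capacity-scheduler.xml. The sketch below shows
the stock default and, in the comment, the DominantResourceCalculator alternative you would
switch to if you did want vcores factored into scheduling; property and class names are the
standard ones, but treat the snippet as illustrative rather than a drop-in config:

    <!-- capacity-scheduler.xml: the default, memory-only calculator -->
    <property>
      <name>yarn.scheduler.capacity.resource-calculator</name>
      <value>org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator</value>
    </property>
    <!-- To have the scheduler account for CPU as well, set the value to
         org.apache.hadoop.yarn.util.resource.DominantResourceCalculator -->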


From: John Lilley [mailto:john.lilley@redpoint.net]
Sent: Tuesday, July 02, 2013 8:57 AM
To: user@hadoop.apache.org
Subject: Containers and CPU

I have YARN tasks that benefit from multicore scaling.  However, they don't *always* use more
than one core.  I would like to allocate containers based only on memory, and let each task
use as many cores as needed, without allocating exclusive CPU "slots" in the scheduler.  For
example, on an 8-core node with 16GB memory, I'd like to be able to run 3 tasks each consuming
4GB memory and each using as much CPU as they like.  Is this the default behavior if I don't
specify CPU restrictions to the scheduler?
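(For concreteness, a minimal yarn-site.xml sketch for the node described above might look
like the following; the values are illustrative, and with the default calculator only the
memory figure actually constrains container placement:)

    <!-- yarn-site.xml on the 8-core / 16GB NodeManager (illustrative values) -->
    <property>
      <name>yarn.nodemanager.resource.memory-mb</name>
      <value>16384</value>   <!-- room for the three 4GB tasks mentioned above -->
    </property>
    <!-- With DefaultResourceCalculator, a vcore setting (e.g.
         yarn.nodemanager.resource.cpu-vcores in recent releases) is not used
         for scheduling, so each task can use whatever CPU is free on the node. -->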
