hadoop-yarn-issues mailing list archives

From "Hadoop QA (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (YARN-810) Support CGroup ceiling enforcement on CPU
Date Fri, 09 Jan 2015 03:20:38 GMT

    https://issues.apache.org/jira/browse/YARN-810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14270492#comment-14270492

Hadoop QA commented on YARN-810:

{color:red}-1 overall{color}.  Here are the results of testing the latest attachment 
  against trunk revision ae91b13.

    {color:green}+1 @author{color}.  The patch does not contain any @author tags.

    {color:green}+1 tests included{color}.  The patch appears to include 8 new or modified
test files.

    {color:green}+1 javac{color}.  The applied patch does not increase the total number of
javac compiler warnings.

    {color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

    {color:green}+1 eclipse:eclipse{color}.  The patch built with eclipse:eclipse.

    {color:green}+1 findbugs{color}.  The patch does not introduce any new Findbugs (version
2.0.3) warnings.

    {color:green}+1 release audit{color}.  The applied patch does not increase the total number
of release audit warnings.

    {color:red}-1 core tests{color}.  The patch failed these unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager


Test results: https://builds.apache.org/job/PreCommit-YARN-Build/6286//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/6286//console

This message is automatically generated.

> Support CGroup ceiling enforcement on CPU
> -----------------------------------------
>                 Key: YARN-810
>                 URL: https://issues.apache.org/jira/browse/YARN-810
>             Project: Hadoop YARN
>          Issue Type: Improvement
>          Components: nodemanager
>    Affects Versions: 2.1.0-beta, 2.0.5-alpha
>            Reporter: Chris Riccomini
>            Assignee: Sandy Ryza
>         Attachments: YARN-810-3.patch, YARN-810-4.patch, YARN-810-5.patch, YARN-810-6.patch,
YARN-810.patch, YARN-810.patch
> Problem statement:
> YARN currently lets you define an NM's pcore count, and a pcore:vcore ratio. Containers
are then allowed to request vcores between the minimum and maximum defined in the yarn-site.xml.
> In the case where a single-threaded container requests 1 vcore, with a pcore:vcore ratio
of 1:4, the container is still allowed to use up to 100% of the core it's using, provided
that no other container is also using it. This happens, even though the only guarantee that
YARN/CGroups is making is that the container will get "at least" 1/4th of the core.
> If a second container then comes along, the second container can take resources from
the first, provided that the first container is still getting at least its fair share (1/4th).
> There are certain cases where this is desirable. There are also certain cases where it
might be desirable to have a hard limit on CPU usage, and not allow the process to go above
the specified resource requirement, even if it's available.
> Here's an RFC that describes the problem in more detail:
> http://lwn.net/Articles/336127/
> Solution:
> As it happens, when CFS is used in combination with CGroups, you can enforce a ceiling
using two files in cgroups:
> {noformat}
> cpu.cfs_quota_us
> cpu.cfs_period_us
> {noformat}
> The usage of these two files is documented in more detail here:
> https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Resource_Management_Guide/sec-cpu.html
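The mechanics are simple enough to sketch. The Python snippet below (illustrative only; the cgroup mount point and container path are assumptions, and in a real deployment the NM's container executor would do these writes) sets a ceiling by writing the two files:

```python
import os

def set_cpu_ceiling(cgroup_dir, quota_us, period_us=100000):
    """Write CFS quota/period into a container's cpu cgroup directory.

    A quota of -1 removes the ceiling (the kernel default).
    """
    with open(os.path.join(cgroup_dir, "cpu.cfs_period_us"), "w") as f:
        f.write(str(period_us))
    with open(os.path.join(cgroup_dir, "cpu.cfs_quota_us"), "w") as f:
        f.write(str(quota_us))

# Example (the container path here is hypothetical):
# set_cpu_ceiling("/cgroup/cpu/hadoop-yarn/container_XXXX", 500000, 1000000)
```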
> Testing:
> I have tested YARN CGroups using the 2.0.5-alpha implementation. By default, it behaves
as described above (it is a soft cap, and allows containers to use more than they asked for).
I then tested CFS CPU quotas manually with YARN.
> First, you can see that CFS is in use in the CGroup, based on the file names:
> {noformat}
>     [criccomi@eat1-qa464 ~]$ sudo -u app ls -l /cgroup/cpu/hadoop-yarn/
>     total 0
>     -r--r--r-- 1 app app 0 Jun 13 16:46 cgroup.procs
>     drwxr-xr-x 2 app app 0 Jun 13 17:08 container_1371141151815_0004_01_000002
>     -rw-r--r-- 1 app app 0 Jun 13 16:46 cpu.cfs_period_us
>     -rw-r--r-- 1 app app 0 Jun 13 16:46 cpu.cfs_quota_us
>     -rw-r--r-- 1 app app 0 Jun 13 16:46 cpu.rt_period_us
>     -rw-r--r-- 1 app app 0 Jun 13 16:46 cpu.rt_runtime_us
>     -rw-r--r-- 1 app app 0 Jun 13 16:46 cpu.shares
>     -r--r--r-- 1 app app 0 Jun 13 16:46 cpu.stat
>     -rw-r--r-- 1 app app 0 Jun 13 16:46 notify_on_release
>     -rw-r--r-- 1 app app 0 Jun 13 16:46 tasks
>     [criccomi@eat1-qa464 ~]$ sudo -u app cat
>     /cgroup/cpu/hadoop-yarn/cpu.cfs_period_us
>     100000
>     [criccomi@eat1-qa464 ~]$ sudo -u app cat
>     /cgroup/cpu/hadoop-yarn/cpu.cfs_quota_us
>     -1
> {noformat}
> Oddly, it appears that the cfs_period_us is set to .1s, not 1s.
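For reference, the effective ceiling is quota/period, and a quota of -1 means no ceiling at all. A small helper (the name is mine, not any kernel or YARN API) makes that arithmetic concrete:

```python
def cap_fraction(quota_us, period_us):
    """Effective CPU ceiling as a fraction of one core's cycles.

    A quota of -1 (the kernel default, as seen above) means unlimited.
    """
    if quota_us < 0:
        return None  # no ceiling enforced
    return quota_us / period_us

# With the values observed above: quota=-1, period=100000 -> no cap,
# which is why the container can burn a whole core.
```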
> We can place processes in hard limits. I have process 4370 running YARN container container_1371141151815_0003_01_000003
on a host. By default, it's running at ~300% cpu usage.
> {noformat}
>                                             CPU
>     4370 criccomi  20   0 1157m 551m  14m S 240.3  0.8  87:10.91 ...
> {noformat}
> When I set the CFS quota:
> {noformat}
>     echo 1000 > /cgroup/cpu/hadoop-yarn/container_1371141151815_0003_01_000003/cpu.cfs_quota_us
>                                              CPU
>     4370 criccomi  20   0 1157m 563m  14m S  1.0  0.8  90:08.39 ...
> {noformat}
> It drops to 1% usage, and you can see the box has room to spare:
> {noformat}
>     Cpu(s):  2.4%us,  1.0%sy,  0.0%ni, 92.2%id,  4.2%wa,  0.0%hi,  0.1%si, 0.0%st
> {noformat}
> Turning the quota back to -1:
> {noformat}
>     echo -1 > /cgroup/cpu/hadoop-yarn/container_1371141151815_0003_01_000003/cpu.cfs_quota_us
> {noformat}
> Burns the cores again:
> {noformat}
>     Cpu(s): 11.1%us,  1.7%sy,  0.0%ni, 83.9%id,  3.1%wa,  0.0%hi,  0.2%si, 0.0%st
>                                             CPU
>     4370 criccomi  20   0 1157m 563m  14m S 253.9  0.8  89:32.31 ...
> {noformat}
> On my dev box, I tested CGroups by running a python process eight times to burn
through all the cores, since YARN was behaving as described above (giving the processes extra
CPU even with a cpu.shares limit). Toggling cfs_quota_us seems to enforce a hard limit.
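For completeness, here is a stand-in for the kind of CPU-burner script used in that test (my own sketch; any tight loop pinned per core works the same way):

```python
import time
from multiprocessing import Process

def burn(seconds):
    """Spin until the deadline, keeping one core saturated."""
    deadline = time.time() + seconds
    n = 0
    while time.time() < deadline:
        n += 1
    return n

def burn_all(cores, seconds):
    """Run one burner process per core, mimicking the 8-process test above."""
    procs = [Process(target=burn, args=(seconds,)) for _ in range(cores)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()

# e.g. burn_all(8, 60) to saturate an 8-core box for a minute while
# watching top and toggling cpu.cfs_quota_us.
```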
> Implementation:
> What do you guys think about introducing a variable to YarnConfiguration:
> bq. yarn.nodemanager.linux-container-executor.cgroups.cpu-ceiling-enforcement
> The default would be false. Setting it to true would cause YARN's LCE to set:
> {noformat}
> cpu.cfs_quota_us=(container-request-vcores/nm-vcore-to-pcore-ratio) * 1000000
> cpu.cfs_period_us=1000000
> {noformat}
> For example, if a container asks for 2 vcores, and the vcore:pcore ratio is 4, you'd set:
> {noformat}
> cpu.cfs_quota_us=(2/4) * 1000000 = 500000
> cpu.cfs_period_us=1000000
> {noformat}
> This would cause CFS to cap the process at 50% of clock cycles.
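A quick check of that arithmetic (the helper name is mine, not a proposed API):

```python
def cfs_ceiling(container_vcores, vcore_to_pcore_ratio, period_us=1000000):
    """Quota/period pair implementing the proposed ceiling formula:
    quota = (container-request-vcores / nm-vcore-to-pcore-ratio) * period."""
    quota_us = int((container_vcores / vcore_to_pcore_ratio) * period_us)
    return quota_us, period_us

# 2 vcores at a vcore:pcore ratio of 4 -> quota 500000 of a 1000000us
# period, i.e. 50% of one core's clock cycles.
```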
> What do you guys think?
> 1. Does this seem like a reasonable request? We have some use-cases for it.
> 2. It's unclear to me how cpu.shares interacts with cpu.cfs_*. I think the ceiling is
hard, no matter what shares is set to. I assume shares only comes into play if the CFS quota
has not been reached, and the process begins competing with others for CPU resources.
> 3. Should this be an LCE config (yarn.nodemanager.linux-container-executor), or should
it be a generic scheduler config (yarn.scheduler.enforce-ceiling-vcores)?

This message was sent by Atlassian JIRA
