hadoop-yarn-dev mailing list archives

From "Alejandro Abdelnur (JIRA)" <j...@apache.org>
Subject [jira] [Created] (YARN-1284) LCE: Race condition leaves dangling cgroups entries for killed containers
Date Tue, 08 Oct 2013 00:02:42 GMT
Alejandro Abdelnur created YARN-1284:
----------------------------------------

             Summary: LCE: Race condition leaves dangling cgroups entries for killed containers
                 Key: YARN-1284
                 URL: https://issues.apache.org/jira/browse/YARN-1284
             Project: Hadoop YARN
          Issue Type: Bug
          Components: nodemanager
    Affects Versions: 2.2.0
            Reporter: Alejandro Abdelnur
            Assignee: Alejandro Abdelnur
            Priority: Blocker


When LCE & cgroups are enabled and a container is killed (in this case by its owning
AM, an MRAM), there seems to be a race condition at the OS level between sending the
SIGTERM/SIGKILL and the OS completing all the necessary cleanup.

The LCE code, after sending the SIGTERM/SIGKILL and getting the exit code, immediately
attempts to clean up the cgroups entry for the container. But this fails with an error like:

{code}
2013-10-07 15:21:24,359 WARN org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor:
Exit code from container container_1381179532433_0016_01_000011 is : 143
2013-10-07 15:21:24,359 DEBUG org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container:
Processing container_1381179532433_0016_01_000011 of type UPDATE_DIAGNOSTICS_MSG
2013-10-07 15:21:24,359 DEBUG org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler:
deleteCgroup: /run/cgroups/cpu/hadoop-yarn/container_1381179532433_0016_01_000011
2013-10-07 15:21:24,359 WARN org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler:
Unable to delete cgroup at: /run/cgroups/cpu/hadoop-yarn/container_1381179532433_0016_01_000011
{code}
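
For context, the failed delete is the kernel's expected behavior: a cgroup directory can only
be rmdir'ed once its {{tasks}} file no longer lists any PIDs, and collecting the container's
exit code does not guarantee the kernel has already detached every task from the cgroup. A
minimal sketch of such an emptiness check (this helper is hypothetical, not part of the actual
CgroupsLCEResourcesHandler API):

{code}
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;

// Hypothetical helper, not the actual handler code: a cgroup directory
// can only be rmdir'ed once its "tasks" file is empty.
final class CgroupCheck {
  static boolean isEmpty(File cgroupDir) throws IOException {
    File tasks = new File(cgroupDir, "tasks");
    try (BufferedReader reader = new BufferedReader(new FileReader(tasks))) {
      return reader.readLine() == null; // no PIDs left -> safe to delete
    }
  }
}
{code}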


CgroupsLCEResourcesHandler.clearLimits() already has logic to wait 500 ms for AM containers
to avoid this problem; it seems this should be done for all containers.

Still, waiting an extra 500 ms for every container seems too expensive.

We should look at doing this in a more time-efficient way, perhaps by spinning until
deleteCgroup() succeeds, with a minimal sleep between attempts and an overall timeout.
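
A minimal sketch of what that spin-with-timeout could look like (the class, method names, and
constants below are hypothetical illustrations, not the actual CgroupsLCEResourcesHandler code):

{code}
import java.io.File;

// Hypothetical sketch of the proposed retry loop. File.delete() on a
// cgroup directory maps to rmdir(2), which fails while the kernel still
// has tasks attached to the cgroup.
final class CgroupDeleter {
  private static final long SLEEP_MS = 20;    // minimal sleep between attempts
  private static final long TIMEOUT_MS = 500; // matches the current AM wait

  static boolean deleteWithRetry(String cgroupPath) {
    File dir = new File(cgroupPath);
    long deadline = System.currentTimeMillis() + TIMEOUT_MS;
    while (!dir.delete()) {                   // succeeds once all tasks are gone
      if (System.currentTimeMillis() >= deadline) {
        return false;                         // give up; caller logs the dangling cgroup
      }
      try {
        Thread.sleep(SLEEP_MS);
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        return false;
      }
    }
    return true;
  }
}
{code}

In the common case the kernel should finish its cleanup within a few milliseconds, so the loop
would exit after one or two attempts and the average cost would stay far below a fixed 500 ms wait.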




--
This message was sent by Atlassian JIRA
(v6.1#6144)
