hadoop-yarn-issues mailing list archives

From "Wangda Tan (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (YARN-3119) Memory limit check need not be enforced unless aggregate usage of all containers is near limit
Date Fri, 30 Jan 2015 22:55:36 GMT

    [ https://issues.apache.org/jira/browse/YARN-3119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299326#comment-14299326
] 

Wangda Tan commented on YARN-3119:
----------------------------------

IMHO, this could be problematic if an under-utilized container (c1) wants to get more resources,
but those resources are already over-used by another container (c2). c1's allocation may fail
because memory is exhausted, since the NM needs some time to reclaim the resources (by killing c2).

> Memory limit check need not be enforced unless aggregate usage of all containers is near limit
> ----------------------------------------------------------------------------------------------
>
>                 Key: YARN-3119
>                 URL: https://issues.apache.org/jira/browse/YARN-3119
>             Project: Hadoop YARN
>          Issue Type: Improvement
>          Components: nodemanager
>            Reporter: Anubhav Dhoot
>            Assignee: Anubhav Dhoot
>         Attachments: YARN-3119.prelim.patch
>
>
> Today we kill any container preemptively even if the total usage of containers on that node
> is well within the limit for YARN. Instead, if we enforce the memory limit only when the total
> usage of all containers is close to some configurable ratio of the overall memory assigned to
> containers, we can allow flexibility in container memory usage without adverse effects. This is
> similar in principle to how cgroups uses soft_limit_in_bytes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
