hadoop-yarn-issues mailing list archives

From "Vinod Kumar Vavilapalli (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (YARN-160) nodemanagers should obtain cpu/memory values from underlying OS
Date Thu, 08 Jan 2015 00:38:35 GMT

    [ https://issues.apache.org/jira/browse/YARN-160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14268606#comment-14268606 ]

Vinod Kumar Vavilapalli commented on YARN-160:

Quick comments on the patch:
 - LinuxResourceCalculatorPlugin: numPhysicalSockets is not used anywhere?
 - WindowsResourceCalculatorPlugin: Why is num-cores set = num-processors ?
 - yarn-default.xml: Change "it will set the X to Y" to be "it will set X to Y by default"
 - yarn.nodemanager.count-logical-processors-as-cores: Not sure of the use for this. On Linux,
shouldn't we simply use the returned numCores if it is valid, and fall back to numProcessors otherwise?
 - yarn.nodemanager.enable-hardware-capability-detection: I think specifying the capabilities
to be -1 is already a way to trigger this automatic detection, let's simply drop the flag
and assume it to be true all the time?
 - CGroupsLCEResourceHandler: The log message 'LOG.info("node vcores = " + nodeVCores);' is
printed for every container launch.
 - Should we enforce somewhere that numCores >= numProcessors, or at least that one is always
a multiple of the other?
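The fallback suggested above (use the detected core count when valid, otherwise the logical-processor count) could look roughly like this; the method and class names here are illustrative, not the actual plugin API:

```java
// Hypothetical sketch of the suggested fallback; names are illustrative
// and do not correspond to the actual ResourceCalculatorPlugin API.
public class CoreCountFallback {
    static int effectiveCores(int numCores, int numProcessors) {
        // A non-positive core count signals that hardware detection failed,
        // so fall back to the logical-processor count.
        return numCores > 0 ? numCores : numProcessors;
    }

    public static void main(String[] args) {
        System.out.println(effectiveCores(-1, 8)); // detection failed: prints 8
        System.out.println(effectiveCores(4, 8));  // valid core count wins: prints 4
    }
}
```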

       int containerPhysicalMemoryMB =
            (int) (0.8f * (physicalMemoryMB - (2 * hadoopHeapSizeMB)));
We already have resource.percentage-physical-cpu-limit for CPUs - YARN-2440. How about simply
adding a resource.percentage-pmem-limit instead of making it a magic number in the code? Of course,
we can have a default reserved percentage.
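To make the suggestion concrete, a sketch of the calculation with the hard-coded 0.8f replaced by a configurable percentage; note that resource.percentage-pmem-limit is only the property name proposed above, not an existing YARN key, and the default of 80 is an assumption:

```java
// Illustrative sketch only: replaces the magic 0.8f with a configurable
// percentage, by analogy with resource.percentage-physical-cpu-limit.
public class PmemLimit {
    // Assumed default for the proposed resource.percentage-pmem-limit key.
    static final int DEFAULT_PMEM_LIMIT_PERCENT = 80;

    static int containerPhysicalMemoryMB(int physicalMemoryMB,
                                         int hadoopHeapSizeMB,
                                         int pmemLimitPercent) {
        // Reserve twice the Hadoop heap, then apply the configured percentage.
        int available = physicalMemoryMB - (2 * hadoopHeapSizeMB);
        return (available * pmemLimitPercent) / 100;
    }

    public static void main(String[] args) {
        // A 64 GB node with a 1 GB Hadoop heap at the assumed 80% default.
        System.out.println(
            containerPhysicalMemoryMB(65536, 1024, DEFAULT_PMEM_LIMIT_PERCENT));
    }
}
```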

> nodemanagers should obtain cpu/memory values from underlying OS
> ---------------------------------------------------------------
>                 Key: YARN-160
>                 URL: https://issues.apache.org/jira/browse/YARN-160
>             Project: Hadoop YARN
>          Issue Type: Improvement
>          Components: nodemanager
>    Affects Versions: 2.0.3-alpha
>            Reporter: Alejandro Abdelnur
>            Assignee: Varun Vasudev
>             Fix For: 2.7.0
>         Attachments: apache-yarn-160.0.patch, apache-yarn-160.1.patch, apache-yarn-160.2.patch,
> As mentioned in YARN-2
> *NM memory and CPU configs*
> Currently these values come from the NM's configuration; we should be able to obtain
> them from the OS (i.e., in the case of Linux, from /proc/meminfo & /proc/cpuinfo).
> As this is highly OS dependent, we should have an interface that obtains this information.
> In addition, implementations of this interface should be able to specify a mem/cpu offset (the
> amount of mem/cpu not to be made available as a YARN resource); this would allow reserving
> mem/cpu for the OS and other services outside of YARN containers.
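The Linux detection-with-offset idea described in the issue can be sketched as below; the method names and the 2048 MB offset are illustrative assumptions, not the patch's actual implementation:

```java
// Hedged sketch of Linux-side detection: read MemTotal from /proc/meminfo
// and subtract a configured offset reserved for the OS and non-YARN
// services. Names and the offset value are illustrative only.
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class ProcMemInfo {
    /** Parse a /proc/meminfo-style line such as "MemTotal: 16384256 kB". */
    static long parseMemTotalKB(String line) {
        String[] parts = line.trim().split("\\s+");
        return Long.parseLong(parts[1]);
    }

    /** Memory to advertise to YARN: detected total minus a reserved offset. */
    static long yarnMemoryMB(long memTotalKB, long reservedOffsetMB) {
        return (memTotalKB / 1024) - reservedOffsetMB;
    }

    public static void main(String[] args) throws IOException {
        try (BufferedReader r = new BufferedReader(new FileReader("/proc/meminfo"))) {
            String line;
            while ((line = r.readLine()) != null) {
                if (line.startsWith("MemTotal:")) {
                    // 2048 MB reserved offset is an assumed example value.
                    System.out.println(yarnMemoryMB(parseMemTotalKB(line), 2048));
                    break;
                }
            }
        }
    }
}
```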

This message was sent by Atlassian JIRA
