hadoop-yarn-issues mailing list archives

From "Miklos Szegedi (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (YARN-5764) NUMA awareness support for launching containers
Date Tue, 06 Mar 2018 00:19:00 GMT

    [ https://issues.apache.org/jira/browse/YARN-5764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16387021#comment-16387021 ]

Miklos Szegedi commented on YARN-5764:

Thank you, [~devaraj.k], for the updated patch.
{code}
public static final String NM_NUMA_AWARENESS_NODE_MEMORY = NM_PREFIX
    + "numa-awareness.<NODE_ID>.memory";
public static final String NM_NUMA_AWARENESS_NODE_CPUS = NM_PREFIX
    + "numa-awareness.<NODE_ID>.cpus";
{code}
These two constants are no-ops; they can probably be omitted.
Optional: is there an example of an asymmetric NUMA architecture? It might make sense in the
future to define nodes once and specify a multiplier, so that we can make the configuration
easier.
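For reference, with the per-node keys above, an asymmetric two-node machine would be configured roughly like this (the memory/cpu values are illustrative, and the concrete key names are inferred from the {{NM_PREFIX + "numa-awareness.<NODE_ID>..."}} pattern quoted earlier, not taken verbatim from the patch):

```xml
<!-- Illustrative yarn-site.xml fragment; key names assumed to follow the
     NM_PREFIX + "numa-awareness.<NODE_ID>..." pattern from the patch. -->
<property>
  <name>yarn.nodemanager.numa-awareness.0.memory</name>
  <value>73728</value> <!-- node 0: 72 GB -->
</property>
<property>
  <name>yarn.nodemanager.numa-awareness.0.cpus</name>
  <value>16</value>
</property>
<property>
  <name>yarn.nodemanager.numa-awareness.1.memory</name>
  <value>36864</value> <!-- node 1: 36 GB, i.e. an asymmetric layout -->
</property>
<property>
  <name>yarn.nodemanager.numa-awareness.1.cpus</name>
  <value>8</value>
</property>
```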
{code}
String[] args = new String[] {"numactl", "--hardware"};
{code}
This should be {{/usr/bin/numactl}} for security reasons. In fact, shouldn't it use the
configured numactl path?
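A minimal sketch of that suggestion, assuming a configurable key such as {{yarn.nodemanager.numa-awareness.numactl.cmd}} (the key name, helper method, and class below are illustrative, not the patch's actual API):

```java
import java.util.Map;

// Illustrative sketch only: resolve numactl from configuration with an
// absolute-path default, instead of relying on a bare "numactl" PATH lookup.
class NumactlCommand {
  // Key name is an assumption, modeled on the other numa-awareness keys.
  static final String NUMACTL_CMD_KEY =
      "yarn.nodemanager.numa-awareness.numactl.cmd";
  static final String DEFAULT_NUMACTL = "/usr/bin/numactl";

  // Build the hardware-probe command from configuration, falling back to a
  // fixed absolute path rather than whatever "numactl" resolves to on $PATH.
  static String[] buildHardwareCommand(Map<String, String> conf) {
    String numactl = conf.getOrDefault(NUMACTL_CMD_KEY, DEFAULT_NUMACTL);
    return new String[] {numactl, "--hardware"};
  }

  public static void main(String[] args) {
    // prints: /usr/bin/numactl --hardware
    System.out.println(String.join(" ", buildHardwareCommand(Map.of())));
  }
}
```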
I think {{recoverCpus}} and {{recoverMemory}} can be eliminated. You could just create a
{{Resource}} object and use {{assignResources}}.
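Roughly what I mean, sketched with a stand-in {{Resource}} class (the real one is {{org.apache.hadoop.yarn.api.records.Resource}}, and {{assignResources}} here is a toy, so names and shapes are assumptions):

```java
// Stand-in for org.apache.hadoop.yarn.api.records.Resource (illustrative only).
class Resource {
  final long memoryMB;
  final int vcores;
  Resource(long memoryMB, int vcores) {
    this.memoryMB = memoryMB;
    this.vcores = vcores;
  }
}

class NumaRecoverySketch {
  // Instead of separate recoverCpus/recoverMemory paths, recovery builds one
  // Resource and funnels it through the same assignment code as allocation.
  static Resource recover(long recoveredMemoryMB, int recoveredCpus) {
    Resource r = new Resource(recoveredMemoryMB, recoveredCpus);
    assignResources(r); // single shared code path for allocate and recover
    return r;
  }

  // Toy body: the real method would mark the NUMA node's memory/cpus as used.
  static void assignResources(Resource r) {
  }

  public static void main(String[] args) {
    Resource r = recover(2048, 4);
    // prints: 2048MB, 4 vcores
    System.out.println(r.memoryMB + "MB, " + r.vcores + " vcores");
  }
}
```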
{code}
NumaResourceAllocation numaNode = allocate(containerId, resource);
{code}
This is a little misleading: {{allocate}} may return multiple allocations on multiple nodes,
not just a single {{numaNode}}.
I have a question. {{recoverNumaResource}} reallocates the resources based on the registered
values. Where are those resources released? It looks like {{testRecoverNumaResource()}} does
not test a container allocation, release, and relaunch cycle, but rather the opposite
direction. What is the reason for that?
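To make the cycle I am asking about concrete, here it is with a toy allocator (the class below is a self-contained stand-in, not the patch's {{NumaResourceAllocator}}; only memory is tracked to keep it short):

```java
import java.util.HashMap;
import java.util.Map;

// Toy single-node allocator: allocate -> release -> relaunch. If release()
// never runs, the second allocate() for the relaunched container fails, which
// is the leak the question above is probing for.
class ToyNumaAllocator {
  private final Map<String, Long> assigned = new HashMap<>(); // containerId -> MB
  private long freeMemoryMB;

  ToyNumaAllocator(long totalMemoryMB) {
    this.freeMemoryMB = totalMemoryMB;
  }

  boolean allocate(String containerId, long memoryMB) {
    if (memoryMB > freeMemoryMB) {
      return false; // not enough free memory on this node
    }
    freeMemoryMB -= memoryMB;
    assigned.put(containerId, memoryMB);
    return true;
  }

  // Returning resources here is what makes a later relaunch possible.
  void release(String containerId) {
    Long mem = assigned.remove(containerId);
    if (mem != null) {
      freeMemoryMB += mem;
    }
  }

  long free() {
    return freeMemoryMB;
  }

  public static void main(String[] args) {
    ToyNumaAllocator node = new ToyNumaAllocator(4096);
    node.allocate("container_1", 3072);
    node.release("container_1"); // without this, the relaunch below fails
    // prints: true
    System.out.println(node.allocate("container_1", 3072));
  }
}
```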

> NUMA awareness support for launching containers
> -----------------------------------------------
>                 Key: YARN-5764
>                 URL: https://issues.apache.org/jira/browse/YARN-5764
>             Project: Hadoop YARN
>          Issue Type: New Feature
>          Components: nodemanager, yarn
>            Reporter: Olasoji
>            Assignee: Devaraj K
>            Priority: Major
>         Attachments: NUMA Awareness for YARN Containers.pdf, NUMA Performance Results.pdf,
> YARN-5764-v0.patch, YARN-5764-v1.patch, YARN-5764-v2.patch, YARN-5764-v3.patch, YARN-5764-v4.patch,
> YARN-5764-v5.patch, YARN-5764-v6.patch, YARN-5764-v7.patch
> The purpose of this feature is to improve Hadoop performance by minimizing costly remote
> memory accesses on non-SMP systems. YARN containers, on launch, will be pinned to a specific
> NUMA node and all subsequent memory allocations will be served by the same node, reducing
> remote memory accesses. The current default behavior is to spread memory across all NUMA nodes.

This message was sent by Atlassian JIRA

