hadoop-yarn-issues mailing list archives

From "Juanjuan Tian (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (YARN-7494) Add muti-node lookup mechanism and pluggable nodes sorting policies to optimize placement decision
Date Thu, 30 May 2019 09:01:00 GMT

    [ https://issues.apache.org/jira/browse/YARN-7494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16851660#comment-16851660 ]

Juanjuan Tian  edited comment on YARN-7494 at 5/30/19 9:00 AM:
---------------------------------------------------------------

Thanks Weiwei for your reply. There seems to be another issue in RegularContainerAllocator#allocate.

Referring to the code below, it iterates through all candidate nodes, but reservedContainer does not
change along with the iterated node. With a multi-node policy, the reservedContainer and the iterated
node can therefore become inconsistent, which may produce an incorrect ContainerAllocation (even though
that ContainerAllocation is eventually abandoned, it still wastes an allocation opportunity).
[~cheersyang], what are your thoughts on this situation?

while (iter.hasNext()) {
  FiCaSchedulerNode node = iter.next();

  if (reservedContainer == null) {
    result = preCheckForNodeCandidateSet(clusterResource, node,
        schedulingMode, resourceLimits, schedulerKey);
    if (null != result) {
      continue;
    }
  } else {
    // pre-check when allocating reserved container
    if (application.getOutstandingAsksCount(schedulerKey) == 0) {
      // Release
      result = new ContainerAllocation(reservedContainer, null,
          AllocationState.QUEUE_SKIPPED);
      continue;
    }
  }

  result = tryAllocateOnNode(clusterResource, node, schedulingMode,
      resourceLimits, schedulerKey, reservedContainer);

  if (AllocationState.ALLOCATED == result.getAllocationState()
      || AllocationState.RESERVED == result.getAllocationState()) {
    result = doAllocation(result, node, schedulerKey, reservedContainer);
    break;
  }
}

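For example, a guard like the following is what I have in mind (just a sketch, not from any attached
patch; RMContainer#getReservedNode() and SchedulerNode#getNodeID() are assumed to behave as in trunk):
skip candidate nodes that do not actually host the reservation before calling tryAllocateOnNode.

// Hypothetical helper, sketch only: for a reserved container, only the node
// that holds the reservation should be tried, otherwise the returned
// ContainerAllocation and the iterated node become inconsistent.
private boolean nodeMatchesReservation(FiCaSchedulerNode node,
    RMContainer reservedContainer) {
  return reservedContainer == null
      || reservedContainer.getReservedNode().equals(node.getNodeID());
}

Calling it at the top of the loop body, e.g. if (!nodeMatchesReservation(node, reservedContainer)) { continue; },
would keep reservedContainer and the iterated node consistent in the multi-node case.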


> Add muti-node lookup mechanism and pluggable nodes sorting policies to optimize placement decision
> --------------------------------------------------------------------------------------------------
>
>                 Key: YARN-7494
>                 URL: https://issues.apache.org/jira/browse/YARN-7494
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: capacity scheduler
>            Reporter: Sunil Govindan
>            Assignee: Sunil Govindan
>            Priority: Major
>             Fix For: 3.2.0
>
>         Attachments: YARN-7494.001.patch, YARN-7494.002.patch, YARN-7494.003.patch, YARN-7494.004.patch,
>                      YARN-7494.005.patch, YARN-7494.006.patch, YARN-7494.007.patch, YARN-7494.008.patch,
>                      YARN-7494.009.patch, YARN-7494.010.patch, YARN-7494.11.patch, YARN-7494.12.patch,
>                      YARN-7494.13.patch, YARN-7494.14.patch, YARN-7494.15.patch, YARN-7494.16.patch,
>                      YARN-7494.17.patch, YARN-7494.18.patch, YARN-7494.19.patch, YARN-7494.20.patch,
>                      YARN-7494.v0.patch, YARN-7494.v1.patch, multi-node-designProposal.png
>
>
> Instead of single node, for effectiveness we can consider a multi node lookup based on partition to start with.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: yarn-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: yarn-issues-help@hadoop.apache.org

