hadoop-yarn-issues mailing list archives

From "Chris Douglas (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (YARN-2883) Queuing of container requests in the NM
Date Wed, 16 Mar 2016 04:44:33 GMT

    [ https://issues.apache.org/jira/browse/YARN-2883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15196751#comment-15196751 ]

Chris Douglas commented on YARN-2883:
-------------------------------------

Concerning v004:
* The use of {{getRemoteUgi}} in {{QueuingContainerManagerImpl::stopContainerInternalIfNotQueued}}
is likely both unnecessary and incorrect. The user-credential check will likely use the UGI
from the {{EventDispatcher}}, not from the RPC call that initiated it. Also, setting the cause
to {{KILLED_BY_APPMASTER}} may be inappropriate if queued containers can be killed for other
reasons.
* If an application completes, its queued containers should be cleared.
* In {{getContainerStatusInternal}}, if the {{ConcurrentMap}} is necessary, then it should
call {{get()}} once on the instance rather than {{containsKey()}} followed by {{get()}}.
* Rather than adding null checks for a disabled queuing context, this could support a null
context that effectively disables the queuing logic (as in {{NodeStatusUpdaterImpl}})
* The queuing does not appear to be fair. New containers are started immediately, without
checking whether the queue is empty, while queued entries are only started from
{{onStopMonitoringContainer}}. With a large container at the front of the queue, smaller
queued containers will not get a chance to run while new, small containers will.
* The queue should be bounded in some way.
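
To illustrate the {{containsKey()}}/{{get()}} point above: on a {{ConcurrentMap}}, two separate calls can race with a concurrent {{remove()}}, whereas a single {{get()}} is one atomic read. The sketch below uses a hypothetical {{lookup}} helper and string statuses, not the actual patch code:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class SingleGetExample {
    // A single get() is one atomic read; containsKey() followed by get()
    // can observe a concurrent remove() between the two calls and then
    // dereference a null. (Hypothetical helper, not the patch's code.)
    static String lookup(ConcurrentMap<String, String> statuses, String id) {
        String status = statuses.get(id);  // one call, checked for null
        return (status != null) ? status : "UNKNOWN";
    }

    public static void main(String[] args) {
        ConcurrentMap<String, String> m = new ConcurrentHashMap<>();
        m.put("container_1", "QUEUED");
        System.out.println(lookup(m, "container_1")); // QUEUED
        System.out.println(lookup(m, "container_2")); // UNKNOWN
    }
}
```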

Minor
* {{NMContext}} can set the queuing context as final, rather than a separate {{setQueuingContext}},
which is not threadsafe as written.
* I didn't look through the test code in detail, but the {{DeletionService}} sleeping for
10s seems odd
* New loggers should use slf4j, and the {{LOG.level("Text {}", arg)}} syntax rather than {{isLevelEnabled()}}
* The default case of {{QueuingContainerManagerImpl::handle}} should throw
* {{0.f}} is a valid literal?
* {{killOpportContainers}} may want to log a warning if killing opportunistic containers is
insufficient to satisfy the contract (after the loop). This would be helpful when debugging.
* Do {{queuedGuarRequests}} and {{queuedOpportRequests}} need to be synchronized? Or is the
handler sufficient?
* {{QueuingContainersMonitorImpl::AllocatedContainerInfo}} could define equals/hashcode and
use {{Collection::remove}} instead of defining {{removeContainerFromQueue}}
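
On the last point, once equals/hashCode are defined over the container id, {{Collection::remove}} finds the element by equality and a hand-written {{removeContainerFromQueue}} scan becomes unnecessary. A minimal sketch, using a hypothetical {{ContainerInfo}} stand-in rather than the patch's {{AllocatedContainerInfo}}:

```java
import java.util.ArrayDeque;
import java.util.Objects;
import java.util.Queue;

public class RemoveByEqualityExample {
    // Hypothetical stand-in for AllocatedContainerInfo: equality is
    // defined over the container id, so remove(Object) can match a
    // freshly constructed key object.
    static final class ContainerInfo {
        final String containerId;
        ContainerInfo(String containerId) { this.containerId = containerId; }

        @Override public boolean equals(Object o) {
            return o instanceof ContainerInfo
                && Objects.equals(containerId, ((ContainerInfo) o).containerId);
        }
        @Override public int hashCode() { return Objects.hashCode(containerId); }
    }

    public static void main(String[] args) {
        Queue<ContainerInfo> queue = new ArrayDeque<>();
        queue.add(new ContainerInfo("c1"));
        queue.add(new ContainerInfo("c2"));
        // remove() locates the entry via equals(); no custom loop needed
        boolean removed = queue.remove(new ContainerInfo("c1"));
        System.out.println(removed + " size=" + queue.size()); // true size=1
    }
}
```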

> Queuing of container requests in the NM
> ---------------------------------------
>
>                 Key: YARN-2883
>                 URL: https://issues.apache.org/jira/browse/YARN-2883
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: nodemanager, resourcemanager
>            Reporter: Konstantinos Karanasos
>            Assignee: Konstantinos Karanasos
>         Attachments: YARN-2883-trunk.004.patch, YARN-2883-yarn-2877.001.patch, YARN-2883-yarn-2877.002.patch,
YARN-2883-yarn-2877.003.patch, YARN-2883-yarn-2877.004.patch
>
>
> We propose to add a queue in each NM, where queueable container requests can be held.
> Based on the available resources in the node and the containers in the queue, the NM
will decide when to allow the execution of a queued container.
> In order to ensure the instantaneous start of a guaranteed-start container, the NM may
decide to pre-empt/kill running queueable containers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
