hadoop-yarn-issues mailing list archives

From "Wangda Tan (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (YARN-1408) Preemption caused Invalid State Event: ACQUIRED at KILLED and caused a task timeout for 30mins
Date Mon, 30 Jun 2014 15:35:26 GMT

    [ https://issues.apache.org/jira/browse/YARN-1408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14047755#comment-14047755 ]

Wangda Tan commented on YARN-1408:
----------------------------------

Hi [~sunilg],
Thanks for updating the patch. The overall approach LGTM; a few comments:

1)
bq. I think we can have a new api in appSchedulingInfo to return list of ResourceRequests (node local, rack local and any).
I would suggest modifying the existing appSchedulingInfo.allocate to return the list of RRs instead. The outstanding-resource decrement logic already lives in allocate(), so we can simply add each decremented RR to a list and return it. That reads more like a natural by-product of ASI.allocate to me.
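To illustrate the suggestion, here is a minimal sketch (not the actual AppSchedulingInfo code — the class and the stripped-down ResourceRequest stand-in are hypothetical simplifications of YARN's real types): the existing decrement loop just also collects each decremented RR into a list that allocate() returns.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Simplified stand-in for AppSchedulingInfo; the real class lives in
// org.apache.hadoop.yarn.server.resourcemanager.scheduler.
class AppSchedulingInfoSketch {
  // Minimal stand-in for ResourceRequest: just a resource name and a count.
  static class ResourceRequest {
    final String resourceName;
    int numContainers;
    ResourceRequest(String resourceName, int numContainers) {
      this.resourceName = resourceName;
      this.numContainers = numContainers;
    }
  }

  private final Map<String, ResourceRequest> requests = new HashMap<>();

  void addRequest(ResourceRequest rr) {
    requests.put(rr.resourceName, rr);
  }

  // The pre-existing decrement logic, now also returning the decremented
  // RRs (node-local, rack-local, ANY) as a by-product for the caller.
  List<ResourceRequest> allocate(String hostName, String rackName) {
    List<ResourceRequest> decremented = new ArrayList<>();
    for (String name : new String[] { hostName, rackName, "*" }) {
      ResourceRequest rr = requests.get(name);
      if (rr != null && rr.numContainers > 0) {
        rr.numContainers--;        // existing outstanding-resource decrement
        decremented.add(rr);       // new: collect for the return value
      }
    }
    return decremented;
  }
}
```

The caller then gets all three levels (node/rack/any) in one call, with no separate lookup API needed.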

2)
{code}
    if (type.equals(NodeType.NODE_LOCAL)) {
      list.add(nodeRequests.get(hostName));
    }
{code}
It's better to clone the RR instead of adding a reference to the list. It works as-is, but cloning lets us set #containers correctly and protects against the RR being changed inside ASI in the future.
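A small sketch of the difference (hypothetical stand-in types, not YARN's actual ResourceRequest): a reference added to the list tracks later mutations inside ASI, while a clone is a stable snapshot whose #containers can be set to exactly what this allocation consumed.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only; ResourceRequestStub is a simplified stand-in for
// YARN's ResourceRequest.
class CloneVsRefSketch {
  static class ResourceRequestStub {
    String resourceName;
    int numContainers;
    ResourceRequestStub(String name, int num) {
      resourceName = name;
      numContainers = num;
    }
    // Shallow copy, analogous in spirit to building a fresh RR instance.
    ResourceRequestStub copy() {
      return new ResourceRequestStub(resourceName, numContainers);
    }
  }

  // Adding a reference: later mutations inside ASI leak into the list.
  static List<ResourceRequestStub> addByRef(ResourceRequestStub rr) {
    List<ResourceRequestStub> list = new ArrayList<>();
    list.add(rr);
    return list;
  }

  // Cloning: the snapshot is immune to later changes, and #containers is
  // set to what this allocation actually consumed.
  static List<ResourceRequestStub> addByClone(ResourceRequestStub rr) {
    List<ResourceRequestStub> list = new ArrayList<>();
    ResourceRequestStub cloned = rr.copy();
    cloned.numContainers = 1; // this allocation consumed one container
    list.add(cloned);
    return list;
  }
}
```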

3) TestCapacityScheduler:
It would be good to have a test for FairScheduler here too. I think we can put the test in org.apache.hadoop.yarn.server.resourcemanager.scheduler and make it parameterized for Fair/Capacity/FIFO.

Two minor comments for TestCapacityScheduler.
3.1 
{code}
    for (ResourceRequest request : requests) {
      // Skip the OffRack and RackLocal resource requests.
      if (request.getResourceName().equals(node.getRackName())
          || request.getResourceName().equals(ResourceRequest.ANY)) {
        Assert.assertEquals(request.getNumContainers(), 1);
        continue;
      }
      
      // Resource request must have been added back in RM after preempt event handling.
      Assert.assertNotNull(app.getResourceRequest(request.getPriority(),
        request.getResourceName()));
    }
{code}
We can simplify it to:
{code}
    for (ResourceRequest request : requests) {
      // Resource request must have been added back in RM after preempt event handling.
      Assert.assertEquals(1, app.getResourceRequest(request.getPriority(),
        request.getResourceName()).getNumContainers());
    }
{code}
Because we added them all back, there's no difference between node/rack/any.

3.2
{code}
    // allocate container
    List<Container> containers = am1.allocate(new ArrayList<ResourceRequest>(),
        new ArrayList<ContainerId>()).getAllocatedContainers();

{code}
Should we wait for the containers to be allocated in a while loop? This works now because we previously called "rm1.waitForState(nm1, ...)", but it's better to wait for container allocation explicitly.
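The explicit wait could follow a generic poll-until pattern along these lines (a hedged sketch; the helper name, timeout, and poll interval are illustrative — in the actual test the condition would accumulate containers returned by am1.allocate(...) on each iteration):

```java
import java.util.function.Supplier;

// Illustrative poll-until helper, similar in spirit to Hadoop's
// GenericTestUtils.waitFor; names and defaults are assumptions.
class WaitForSketch {
  // Polls the condition until it returns true or the timeout expires.
  // Returns true if the condition was satisfied within the timeout.
  static boolean waitFor(Supplier<Boolean> condition, long timeoutMs,
      long pollIntervalMs) {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (!condition.get()) {
      if (System.currentTimeMillis() >= deadline) {
        return false;
      }
      try {
        Thread.sleep(pollIntervalMs);
      } catch (InterruptedException ie) {
        Thread.currentThread().interrupt();
        return false;
      }
    }
    return true;
  }
}
```

In the test this makes the dependency explicit: the assertion on allocated containers only runs once allocation has actually happened, instead of relying on an earlier waitForState call as a side effect.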


Thanks,
Wangda

> Preemption caused Invalid State Event: ACQUIRED at KILLED and caused a task timeout for 30mins
> ----------------------------------------------------------------------------------------------
>
>                 Key: YARN-1408
>                 URL: https://issues.apache.org/jira/browse/YARN-1408
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: resourcemanager
>    Affects Versions: 2.2.0
>            Reporter: Sunil G
>            Assignee: Sunil G
>         Attachments: Yarn-1408.1.patch, Yarn-1408.2.patch, Yarn-1408.3.patch, Yarn-1408.4.patch, Yarn-1408.5.patch, Yarn-1408.6.patch, Yarn-1408.patch
>
>
> Capacity preemption is enabled as follows.
>  *  yarn.resourcemanager.scheduler.monitor.enable= true ,
>  *  yarn.resourcemanager.scheduler.monitor.policies=org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy
> Queue = a,b
> Capacity of Queue A = 80%
> Capacity of Queue B = 20%
> Step 1: Submit a big jobA to queue a which uses the full cluster capacity
> Step 2: Submit a jobB to queue b which would use less than 20% of the cluster capacity
> A jobA task which uses queue b's capacity is preempted and killed.
> This caused the following problem:
> 1. A new container got allocated for jobA in queue A on a node update from an NM.
> 2. This container was immediately preempted by the preemption policy.
> Here the ACQUIRED at KILLED invalid-state exception came when the next AM heartbeat reached the RM.
> ERROR org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: Can't handle this event at current state
> org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: ACQUIRED at KILLED
> This also caused the Task to go for a timeout for 30 minutes as this Container was already killed by preemption.
> attempt_1380289782418_0003_m_000000_0 Timed out after 1800 secs



--
This message was sent by Atlassian JIRA
(v6.2#6252)
