hadoop-mapreduce-issues mailing list archives

From "Hudson (JIRA)" <j...@apache.org>
Subject [jira] Commented: (MAPREDUCE-2256) FairScheduler fairshare preemption from multiple pools may preempt all tasks from one pool causing that pool to go below fairshare.
Date Sat, 12 Feb 2011 14:31:10 GMT

    [ https://issues.apache.org/jira/browse/MAPREDUCE-2256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12993912#comment-12993912 ]

Hudson commented on MAPREDUCE-2256:
-----------------------------------

Integrated in Hadoop-Mapreduce-22-branch #33 (See [https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-22-branch/33/])

> FairScheduler fairshare preemption from multiple pools may preempt all tasks from one pool causing that pool to go below fairshare.
> -----------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-2256
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2256
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: contrib/fair-share
>    Affects Versions: 0.21.1, 0.22.0
>            Reporter: Priyo Mustafi
>            Assignee: Priyo Mustafi
>             Fix For: 0.22.0
>
>         Attachments: mapreduce-2256_0_22.txt
>
>
> Scenario:
> You have a cluster with 600 map slots and 3 pools. Fair share for each pool is 200 to start with. The fair share preemption timeout is 5 mins.
> 1)  Pool1 schedules 300 map tasks first
> 2)  Pool2 then schedules another 300 map tasks
> 3)  Pool3 demands 300 map tasks but doesn't get any slot, as all slots are taken.
> 4)  After 5 mins, pool3 should preempt 200 map slots. Instead of preempting 100 slots each from pool1 and pool2, the bug causes it to preempt all 200 slots from pool2 (last started), causing that pool to go below its fair share. This happens because the preemptTask method does not reduce the count of tasks left in a pool while preempting its tasks.
> The above scenario may be an extreme case, but some amount of excess preemption would happen because of this bug.
> The patch I created is for 0.22.0, but the code fix should work on 0.21 as well, as it looks like it has the same bug.
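
The following Java sketch illustrates the over-preemption pattern the report describes. The Pool class, field names, and selection loop below are simplified stand-ins for illustration only, not the actual FairScheduler preemptTask code:

import java.util.Comparator;
import java.util.List;

class Pool {
    final String name;
    int runningTasks;   // tasks currently holding slots
    final int fairShare;

    Pool(String name, int runningTasks, int fairShare) {
        this.name = name;
        this.runningTasks = runningTasks;
        this.fairShare = fairShare;
    }

    int tasksOverShare() {
        return runningTasks - fairShare;
    }
}

public class PreemptionSketch {
    // Preempt slotsNeeded tasks, always taking from the pool that is
    // furthest over its fair share.
    static void preempt(List<Pool> pools, int slotsNeeded) {
        for (int i = 0; i < slotsNeeded; i++) {
            Pool victim = pools.stream()
                    .max(Comparator.comparingInt(Pool::tasksOverShare))
                    .orElseThrow();
            // The reported bug: if this decrement is omitted, the same
            // pool still looks maximally over share on every iteration
            // (which pool absorbs everything depends on tie-breaking,
            // e.g. start time), so all preemptions land on one pool and
            // drive it below its fair share. Updating the count spreads
            // the preemptions: 100 from pool1 and 100 from pool2.
            victim.runningTasks--;
        }
    }

    public static void main(String[] args) {
        // The scenario from the report: two pools at 300 running tasks,
        // fair share 200 each, and 200 slots to reclaim for pool3.
        List<Pool> pools = List.of(
                new Pool("pool1", 300, 200),
                new Pool("pool2", 300, 200));
        preempt(pools, 200);
        for (Pool p : pools) {
            // With the decrement in place, both pools end at 200.
            System.out.println(p.name + " now runs " + p.runningTasks);
        }
    }
}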

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
