hadoop-mapreduce-issues mailing list archives

From "Karthik Kambatla (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (MAPREDUCE-5689) MRAppMaster does not preempt reducers when scheduled maps cannot be fulfilled
Date Fri, 03 Jan 2014 17:34:01 GMT

     [ https://issues.apache.org/jira/browse/MAPREDUCE-5689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Karthik Kambatla updated MAPREDUCE-5689:

       Resolution: Fixed
    Fix Version/s: 2.4.0
     Hadoop Flags: Reviewed
           Status: Resolved  (was: Patch Available)

Thanks Lohit. Committed this to trunk and branch-2.

> MRAppMaster does not preempt reducers when scheduled maps cannot be fulfilled
> -----------------------------------------------------------------------------
>                 Key: MAPREDUCE-5689
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5689
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>    Affects Versions: 3.0.0, 2.2.0
>            Reporter: Lohit Vijayarenu
>            Assignee: Lohit Vijayarenu
>            Priority: Critical
>             Fix For: 3.0.0, 2.4.0
>         Attachments: MAPREDUCE-5689.1.patch, MAPREDUCE-5689.2.patch
> We saw a corner case where jobs running on the cluster were hung. The scenario was as
> follows: a job was running in a pool that was at capacity. All available containers were
> occupied by reducers and the last 2 mappers, and a few more reducers were waiting to be
> scheduled in the pipeline.
> At this point the 2 running mappers failed and went back to the scheduled state. The 2
> freed containers were assigned to reducers, so the whole pool was now full of reducers
> waiting on the 2 maps to complete. Those 2 maps never got scheduled because the pool was full.
> Ideally, reducer preemption should have kicked in to make room for the mappers via this
> code in RMContainerAllocator:
> {code}
>     int completedMaps = getJob().getCompletedMaps();
>     int completedTasks = completedMaps + getJob().getCompletedReduces();
>     if (lastCompletedTasks != completedTasks) {
>       lastCompletedTasks = completedTasks;
>       recalculateReduceSchedule = true;
>     }
>     if (recalculateReduceSchedule) {
>       preemptReducesIfNeeded();
> {code}
> But in this scenario, lastCompletedTasks always equals completedTasks because no maps
> ever complete, so preemptReducesIfNeeded() is never invoked and the job hangs forever. As a
> workaround, killing a few reducers lets the mappers get scheduled and the job complete.
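The core of the problem is that the preemption check is gated on task-completion progress rather than on unfulfilled map requests. A minimal sketch of the kind of condition the report argues for is below; the names (scheduledMaps, headroom) and the standalone class are illustrative assumptions, not the actual MAPREDUCE-5689 patch, which lives inside Hadoop's RMContainerAllocator and is more involved.

```java
// Illustrative sketch only: not the actual Hadoop fix. The idea is that the
// preemption decision should depend on whether scheduled maps can be granted
// containers, not on whether the completed-task count changed.
public class ReducerPreemptionSketch {

    /**
     * Preempt reducers whenever maps are waiting for containers but the pool
     * has no headroom left to grant them. Unlike the quoted snippet, this
     * check still fires when failed maps are re-scheduled while no new tasks
     * complete, which is exactly the hang described in this issue.
     */
    public static boolean shouldPreemptReducers(int scheduledMaps, int headroom) {
        return scheduledMaps > 0 && headroom <= 0;
    }

    public static void main(String[] args) {
        // Scenario from the report: 2 re-scheduled maps, pool full (no headroom).
        System.out.println(shouldPreemptReducers(2, 0)); // true -> preempt reducers
        // No pending maps: reducers may keep their containers.
        System.out.println(shouldPreemptReducers(0, 0)); // false
    }
}
```

With a condition of this shape, the two re-scheduled maps in the reported scenario would trigger preemption even though lastCompletedTasks never changes.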

This message was sent by Atlassian JIRA
