hadoop-mapreduce-issues mailing list archives

From "Karthik Kambatla (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (MAPREDUCE-5817) Mappers get rescheduled on node transition even after all reducers are completed
Date Fri, 14 Aug 2015 19:43:46 GMT

     [ https://issues.apache.org/jira/browse/MAPREDUCE-5817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]

Karthik Kambatla updated MAPREDUCE-5817:
----------------------------------------
       Resolution: Fixed
     Hadoop Flags: Reviewed
    Fix Version/s: 2.8.0
           Status: Resolved  (was: Patch Available)

Just committed this to trunk and branch-2. 

Thanks [~sjlee0] for the contribution and [~chris.douglas] for the review. 

> Mappers get rescheduled on node transition even after all reducers are completed
> --------------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-5817
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5817
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: applicationmaster
>    Affects Versions: 2.3.0
>            Reporter: Sangjin Lee
>            Assignee: Sangjin Lee
>             Fix For: 2.8.0
>
>         Attachments: MAPREDUCE-5817.001.patch, MAPREDUCE-5817.002.patch, mapreduce-5817.patch
>
>
> We're seeing behavior where a job keeps running long after all reducers have
finished. We found that the job was rescheduling and running a number of mappers beyond the
point of reducer completion. In one case, the job ran for some 9 more hours after all
reducers completed!
> This happens because whenever a node transition (to an unusable state) reaches the
app master, it reschedules all mappers that previously ran on that node, in all cases.
> Therefore, any node transition has the potential to extend the job's duration. Once this
window opens, another node transition can prolong it, and in theory this can happen
indefinitely.
> If there is instability in the node pool (unhealthy nodes, etc.) for some duration, any
big job is severely vulnerable to this problem.
> If all reducers have completed, JobImpl.actOnUnusableNode() should not reschedule
mapper tasks: the mapper outputs are no longer needed, and rescheduled mappers would run
without any reducer ever consuming their output.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
