hadoop-mapreduce-issues mailing list archives

From "Robert Joseph Evans (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (MAPREDUCE-4733) Reducer can fail to make progress during shuffle if too many reducers complete consecutively
Date Fri, 19 Oct 2012 14:36:12 GMT

    [ https://issues.apache.org/jira/browse/MAPREDUCE-4733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13480058#comment-13480058 ]

Robert Joseph Evans commented on MAPREDUCE-4733:

The javadoc warnings are because there are -4 of them, which means there are 4 fewer than the
script expected :)  Someone fixed some and did not update it.
> Reducer can fail to make progress during shuffle if too many reducers complete consecutively
> --------------------------------------------------------------------------------------------
>                 Key: MAPREDUCE-4733
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4733
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: applicationmaster, mrv2
>    Affects Versions: 0.23.3
>            Reporter: Jason Lowe
>            Assignee: Jason Lowe
>         Attachments: MAPREDUCE-4733.patch
> TaskAttemptListenerImpl implements getMapCompletionEvents by calling Job.getTaskAttemptCompletionEvents
> with the same fromEvent and maxEvents passed in from the reducer and then filtering the result
> for just map events. We can't filter the task completion event list and expect the caller's
> "window" into the list to match up.  As soon as a reducer event appears in the list it means
> we are redundantly sending map completion events that were already seen by the reducer.
> Worst case the reducer will hang if all of the events in the requested window are reducer
> events.  In that case zero events will be reported back to the caller and it won't bump up
> fromEvent on the next call.  Reducer then never sees the final map completion events needed
> to complete the shuffle. This could happen in a case where all maps complete, more than MAX_EVENTS
> reducers complete consecutively, but some straggling reducers get fetch failures and cause
> a map to be restarted.
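The hang described in the issue can be sketched with a toy simulation. This is a hedged illustration, not the actual MRv2 code: the class name, event encoding ('M' for map completion, 'R' for reduce completion), and list contents are all made-up assumptions chosen to show why applying the caller's window before filtering can return zero map events forever.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical simulation of the windowing bug: the (fromEvent, maxEvents)
// window is applied to the *combined* completion-event list first, and only
// then are map events filtered out.
public class ShuffleWindowBug {
    static List<Character> buggyGetMapEvents(List<Character> allEvents,
                                             int fromEvent, int maxEvents) {
        List<Character> result = new ArrayList<>();
        int end = Math.min(fromEvent + maxEvents, allEvents.size());
        for (int i = fromEvent; i < end; i++) {
            if (allEvents.get(i) == 'M') {   // keep only map completions
                result.add(allEvents.get(i));
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // All maps complete, then more than maxEvents reducers complete
        // consecutively, then a fetch failure restarts a map, whose new
        // completion event lands at the end of the list.
        List<Character> events = List.of('M', 'M', 'R', 'R', 'R', 'R', 'M');
        int maxEvents = 4;

        // The reducer has already consumed the first two map events,
        // so it asks for events starting at fromEvent = 2.
        List<Character> batch = buggyGetMapEvents(events, 2, maxEvents);

        // The window [2, 6) holds only reducer events, so zero map events
        // come back.  The reducer therefore never bumps fromEvent and never
        // reaches the restarted map's event at index 6: it hangs.
        System.out.println(batch.size()); // prints 0
    }
}
```

The filtering must instead happen against a map-only view of the events (or the window must track positions in the unfiltered list on the server side) so that the caller's fromEvent always advances past reducer events.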

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
