From "Joydeep Sen Sarma (JIRA)" <j...@apache.org>
Subject [jira] Commented: (MAPREDUCE-1800) using map output fetch failures to blacklist nodes is problematic
Date Sat, 22 May 2010 14:59:19 GMT

    [ https://issues.apache.org/jira/browse/MAPREDUCE-1800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12870303#action_12870303 ]

Joydeep Sen Sarma commented on MAPREDUCE-1800:
----------------------------------------------

btw - we don't need distributed failure detection to cover the case of application errors
while serving map outputs. if the map-side task tracker encounters enough failures while
serving map outputs, it can either commit suicide or report this fact directly to the
JT (instead of relying on the reducers to do so). in that sense, such application errors are
no different from errors encountered while executing map/reduce tasks.
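
a minimal sketch of what that map-side self-reporting might look like (every class,
method, and threshold name here is an illustrative assumption, not an actual Hadoop API):

    // Hypothetical sketch: the map-side task tracker counts its own
    // failures while serving map outputs and escalates directly,
    // instead of waiting for reducers to complain to the JT.
    class MapOutputServingMonitor {
      // illustrative threshold, not a real Hadoop config value
      private static final int MAX_SERVE_FAILURES = 10;
      private int serveFailures = 0;

      // Called whenever serving a map output to a reducer fails for an
      // application-level reason (missing file, bad checksum, etc.).
      synchronized void onServeFailure(Exception cause) {
        serveFailures++;
        if (serveFailures >= MAX_SERVE_FAILURES) {
          // Treat this like any other task-execution error: tell the
          // JT directly, or exit so the node stops serving map outputs
          // ("commit suicide" above).
          reportToJobTracker(cause);
        }
      }

      private void reportToJobTracker(Exception cause) {
        // Placeholder for an RPC to the JT; a real protocol would
        // carry host and failure details.
        System.err.println("reporting local fetch-serving failures: " + cause);
      }
    }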

it seems that the only non-trivial cases the reducer needs to report are network
errors, which are inherently symmetric in nature. the onus then shifts to the JT to infer
which party is to blame (if any) by looking at the collective set of errors being reported
across the system.
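
as a hedged sketch of that JT-side inference (again, all names and thresholds are
illustrative assumptions): collect fetch-failure reports as (reducer host, map host)
pairs, and suspect a map host only when many distinct reducers accuse it and those
accusers are not themselves heavily accused - symmetric accusations point to a shared
network problem rather than a bad endpoint.

    import java.util.Collections;
    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    // Hypothetical JT-side arbiter for fetch-failure reports.
    class FetchFailureArbiter {
      // illustrative threshold, not a real Hadoop config value
      private static final int MIN_DISTINCT_ACCUSERS = 5;

      // map host -> distinct reducer hosts reporting fetch failures
      private final Map<String, Set<String>> accusers = new HashMap<>();

      synchronized void recordFailure(String reducerHost, String mapHost) {
        accusers.computeIfAbsent(mapHost, k -> new HashSet<>()).add(reducerHost);
      }

      // Suspect a node only when many distinct peers fail against it
      // AND most of those peers are not equally accused themselves;
      // otherwise the symmetric errors suggest a network partition.
      synchronized boolean isSuspect(String mapHost) {
        Set<String> reporters =
            accusers.getOrDefault(mapHost, Collections.emptySet());
        if (reporters.size() < MIN_DISTINCT_ACCUSERS) {
          return false;
        }
        long heavilyAccusedReporters = reporters.stream()
            .filter(r -> accusers.getOrDefault(r, Collections.emptySet())
                                 .size() >= MIN_DISTINCT_ACCUSERS)
            .count();
        // If most of the accusers look just as bad, blame the network,
        // not the map host.
        return heavilyAccusedReporters < reporters.size() / 2;
      }
    }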

> using map output fetch failures to blacklist nodes is problematic
> -----------------------------------------------------------------
>
>                 Key: MAPREDUCE-1800
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1800
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>            Reporter: Joydeep Sen Sarma
>
> If a mapper and a reducer cannot communicate, then either party could be at fault. The
> current Hadoop protocol allows reducers to declare the node running the mapper to be at fault.
> When a sufficient number of reducers do so, the map node can be blacklisted.
> In cases where networking problems cause substantial degradation in communication across
> sets of nodes, a large number of nodes can become blacklisted as a result of this protocol.
> The blacklisting is often wrong (reducers on the smaller side of a network partition can
> collectively cause nodes on the larger side of the partition to be blacklisted) and
> counterproductive (rerunning maps puts further load on the already maxed-out network links).
> We should revisit how we can better identify nodes with genuine network problems (and
> what role, if any, map-output fetch failures should play in this).

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

