hadoop-common-dev mailing list archives

From "stack@archive.org (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-86) If corrupted map outputs, reducers get stuck fetching forever
Date Fri, 17 Mar 2006 22:24:03 GMT
    [ http://issues.apache.org/jira/browse/HADOOP-86?page=comments#action_12370898 ] 

stack@archive.org commented on HADOOP-86:
-----------------------------------------

Testing over the weekend.

> If corrupted map outputs, reducers get stuck fetching forever
> -------------------------------------------------------------
>
>          Key: HADOOP-86
>          URL: http://issues.apache.org/jira/browse/HADOOP-86
>      Project: Hadoop
>         Type: Bug
>     Reporter: stack@archive.org
>  Attachments: mapout.patch
>
> In our rack, there is a machine that reliably corrupts map output parts.  When a reducer
> tries to pick up the map output, Server#Handler checks the checksum, notices the corruption,
> moves the bad map output part aside, and throws a ChecksumException.  Undeterred, the reducer
> comes back minutes later, only this time it gets a FileNotFoundException out of Server#Handler
> (because the part was moved aside).  And so it goes, till the cows come home.
> Doug applied a patch so that when serving a map output file, on noticing a fatal exception,
> we log a severe error on TaskTracker#LOG.  Then, if a severe error has been logged, the
> TaskTracker does a soft restart (the TaskTracker stays up but closes down all its services and
> goes through init again).  This patch was committed (after I suggested it was working), only,
> later, I noticed the severe-log flag is not cleared across the TaskTracker restart, so the
> TaskTracker goes into a cycle of continuous restarts.
> A further patch that clears the severe flag was posted to the list.  This improves things,
> but it has issues too: on revival, the TaskTracker continues to be plagued by reducers looking
> for parts that are no longer available, for a period of ten minutes or so, until the JobTracker
> gets around to updating them about where to go get the map outputs.  During this period the
> TaskTracker gets restarted 5-10 times, but it eventually comes back online (though so much
> damage may be done during this period of flux that the job fails).
> This issue covers implementing a better solution.  
> Suggestions include having the TaskTracker stay down for a period to avoid the incoming
> reducers, or examining each incoming reducer request against the TaskTracker's list of tasks
> and rejecting it with a non-severe error if the requested map is not one run by the current
> TaskTracker.  A little birdie (named DC) suggests a better solution is probably an addition to
> InterTrackerProtocol so that either the TaskTracker or the reducer updates the JobTracker when
> it encounters corrupted map output.
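
The failure loop described above can be sketched in a few lines.  This is a simplified
simulation, not the actual Hadoop code: the class names (Hadoop86Sketch, MapOutputServer) and
the single-part setup are illustrative assumptions, but the sequence of exceptions matches the
report -- first fetch hits a ChecksumException and the part is moved aside, so every later
fetch hits FileNotFoundException, and nothing ever tells the JobTracker to re-run the map.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical, simplified simulation of the HADOOP-86 failure mode.
// Names are illustrative; these are not the real Hadoop classes.
public class Hadoop86Sketch {
    static class ChecksumException extends Exception {}
    static class FileNotFoundException extends Exception {}

    // Serves map output parts; on a checksum failure the corrupt part
    // is moved aside, so it is gone for all subsequent requests.
    static class MapOutputServer {
        private final Set<String> parts = new HashSet<>();
        private final Set<String> corrupt = new HashSet<>();

        MapOutputServer(String part, boolean isCorrupt) {
            parts.add(part);
            if (isCorrupt) corrupt.add(part);
        }

        byte[] fetch(String part) throws ChecksumException, FileNotFoundException {
            if (!parts.contains(part)) throw new FileNotFoundException();
            if (corrupt.contains(part)) {
                parts.remove(part);          // part moved aside, never served again
                throw new ChecksumException();
            }
            return new byte[0];              // a healthy part would be returned here
        }
    }

    // One reducer fetch attempt, reporting which outcome it saw.
    static String attempt(MapOutputServer tt) {
        try { tt.fetch("part-0"); return "ok"; }
        catch (ChecksumException e)     { return "checksum"; }
        catch (FileNotFoundException e) { return "not-found"; }
    }

    public static void main(String[] args) {
        MapOutputServer tt = new MapOutputServer("part-0", true);
        // The reducer retries forever: one checksum failure, then
        // not-found on every later attempt.
        System.out.println(attempt(tt) + " " + attempt(tt) + " " + attempt(tt));
        // prints: checksum not-found not-found
    }
}
```

The sketch shows why only the proposed protocol change breaks the loop: neither exception on
its own carries enough information for the reducer or the JobTracker to know the map must be
re-run.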

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira

