hadoop-mapreduce-issues mailing list archives

From "Gera Shegalov (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (MAPREDUCE-6166) Reducers do not catch bad map output transfers during shuffle if data shuffled directly to disk
Date Wed, 03 Dec 2014 23:26:13 GMT

    [ https://issues.apache.org/jira/browse/MAPREDUCE-6166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14233666#comment-14233666 ]

Gera Shegalov commented on MAPREDUCE-6166:
------------------------------------------

Hi [~eepayne], sorry for the delay. I knew what modifications you were talking about, but
I did not have the time to verify and convince myself whether this double checksumming is
really needed in the Merger. I did, however, run a version of the patch implementing my
suggestion above through a couple of unit tests and did not see any issues:
{code}
mapreduce.task.reduce.TestFetcher
mapreduce.task.reduce.TestMergeManager
mapreduce.task.reduce.TestMerger
{code}
That's why I hoped you'd point me to where the failure occurs. That's my current status on it.
I hope to get back to it again soon.
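
For reference, those three test classes can typically be run on their own from a Hadoop
source checkout with Maven; the module path below reflects the usual layout of the tree and
may need adjusting for your checkout:
{code}
mvn test -pl hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core \
    -Dtest=TestFetcher,TestMergeManager,TestMerger
{code}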

> Reducers do not catch bad map output transfers during shuffle if data shuffled directly to disk
> -------------------------------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-6166
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6166
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: mrv2
>    Affects Versions: 2.6.0
>            Reporter: Eric Payne
>            Assignee: Eric Payne
>         Attachments: MAPREDUCE-6166.v1.201411221941.txt, MAPREDUCE-6166.v2.201411251627.txt
>
>
> In very large map/reduce jobs (50000 maps, 2500 reducers), the intermediate map partition
output gets corrupted on disk on the map side. If this corrupted map output is too large to
shuffle in memory, the reducer streams it to disk without validating the checksum. In jobs
this large, it could take hours before the reducer finally tries to read the corrupted file
and fails. Since retries of the failed reduce attempt will also take hours, this delay in
discovering the failure is multiplied greatly.
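
For context on the failure mode: the fix under discussion is to validate the map output's
checksum while it is being streamed to disk, so a corrupted transfer fails the fetch
immediately instead of surfacing hours later at read time. Below is a minimal sketch of that
idea using plain java.util.zip.CRC32 rather than Hadoop's actual IFile machinery; the class,
method, and parameter names are illustrative, not the patch's API:
{code}
import java.io.*;
import java.util.zip.CRC32;

/**
 * Illustrative only: stream shuffle data to disk while updating a running
 * checksum, then compare against the expected trailing checksum so corruption
 * is caught at fetch time rather than when the reducer later reads the file.
 */
public class ChecksummedDiskCopy {
  public static void copyAndVerify(InputStream mapOutput, File target,
                                   long dataLen, long expectedCrc)
      throws IOException {
    CRC32 crc = new CRC32();
    byte[] buf = new byte[64 * 1024];
    long remaining = dataLen;
    try (OutputStream out =
             new BufferedOutputStream(new FileOutputStream(target))) {
      while (remaining > 0) {
        int n = mapOutput.read(buf, 0, (int) Math.min(buf.length, remaining));
        if (n < 0) {
          throw new EOFException("Unexpected end of map output stream");
        }
        crc.update(buf, 0, n);  // checksum the bytes as they are copied
        out.write(buf, 0, n);
        remaining -= n;
      }
    }
    if (crc.getValue() != expectedCrc) {
      // Fail the fetch now so it can be retried, instead of leaving a
      // corrupt on-disk segment for the merge/reduce phase to trip over.
      throw new IOException("Checksum mismatch for " + target
          + ": expected " + expectedCrc + ", got " + crc.getValue());
    }
  }
}
{code}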



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
