hadoop-mapreduce-issues mailing list archives

From "Nathan Roberts (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (MAPREDUCE-5308) Shuffling to memory can get out-of-sync when fetching multiple compressed map outputs
Date Fri, 07 Jun 2013 19:40:20 GMT

    [ https://issues.apache.org/jira/browse/MAPREDUCE-5308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13678345#comment-13678345 ]

Nathan Roberts commented on MAPREDUCE-5308:
-------------------------------------------

Verified the fix on both trunk and 0.23 using the following 100MB terasort:
- io.file.buffer.size set to 1024 (improves the likelihood of the problem occurring)
- teragen with 1000000 records
- terasort -Dmapred.max.split.size=1000000
- Tested with DefaultCodec, GzipCodec, BzipCodec, DeflateCodec, Lz4Codec, and no compression
- Default, Gzip, Bzip, and Deflate all originally reproduced the problem. With the fix there were no fetch failures in any of the cases.
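
For reference, the same repro can be driven programmatically along the lines below. This is only a rough sketch, not part of the patch: the example class locations (org.apache.hadoop.examples.terasort.*) and the old-style property names are assumptions for the versions listed above, and the usual teragen/terasort CLI invocations work just as well.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.examples.terasort.TeraGen;
import org.apache.hadoop.examples.terasort.TeraSort;
import org.apache.hadoop.util.ToolRunner;

public class ShuffleDesyncRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Small file buffer makes it much more likely that a codec's trailing
    // bytes straddle a buffer boundary during the shuffle.
    conf.setInt("io.file.buffer.size", 1024);
    // Compress map outputs; swap the codec class to cover each codec above.
    conf.setBoolean("mapred.compress.map.output", true);
    conf.set("mapred.map.output.compression.codec",
             "org.apache.hadoop.io.compress.GzipCodec");
    // Small splits -> many maps, so a reducer fetches several map outputs
    // from the same host on one shuffle connection.
    conf.setLong("mapred.max.split.size", 1000000L);

    // 1000000 100-byte records, i.e. roughly 100MB of input.
    ToolRunner.run(conf, new TeraGen(), new String[] {"1000000", "/tmp/teragen"});
    ToolRunner.run(conf, new TeraSort(), new String[] {"/tmp/teragen", "/tmp/terasort"});
  }
}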

> Shuffling to memory can get out-of-sync when fetching multiple compressed map outputs
> -------------------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-5308
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5308
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>    Affects Versions: trunk, 2.0.3-alpha, 0.23.8
>            Reporter: Nathan Roberts
>            Assignee: Nathan Roberts
>         Attachments: MAPREDUCE-5308-branch-0.23.txt, MAPREDUCE-5308.patch
>
>
> When a reducer is fetching multiple compressed map outputs from a host, the fetcher can get out-of-sync with the IFileInputStream, causing several of the maps to fail to fetch.
> This occurs because decompressors can return all the decompressed bytes before actually processing all the bytes in the compressed stream (due to checksums or other trailing data that we ignore). In the unfortunate case where these extra bytes cross an io.file.buffer.size boundary, some extra bytes will be left over and the next map_output will not fetch correctly (usually due to an invalid map_id).
> This scenario is not typically fatal to a job because the failure is charged to the map_output immediately following the "bad" one and the subsequent retry will normally work.
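
The trailing-bytes behaviour described above can be seen with the plain JDK zlib classes. The sketch below is only an analogy for the Hadoop codec path (it does not touch the Fetcher or IFileInputStream, and the 8-byte "trailer" is made up for the demo), but it shows a decompressor handing back every decompressed byte while input bytes it never consumed are still left behind:

import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class TrailerDemo {
  public static void main(String[] args) throws Exception {
    byte[] raw = "hello shuffle".getBytes("UTF-8");

    // Compress the payload (zlib-wrapped deflate).
    Deflater def = new Deflater();
    def.setInput(raw);
    def.finish();
    byte[] compressed = new byte[1024];
    int clen = 0;
    while (!def.finished()) {
      clen += def.deflate(compressed, clen, compressed.length - clen);
    }

    // Pretend the on-the-wire format appends 8 trailer bytes (a checksum,
    // say) after the compressed data, the way gzip does.
    byte[] wire = new byte[clen + 8];
    System.arraycopy(compressed, 0, wire, 0, clen);

    Inflater inf = new Inflater();
    inf.setInput(wire);                   // decompressor sees data + trailer
    byte[] out = new byte[1024];
    int produced = inf.inflate(out);

    // Every decompressed byte is back, but the 8 trailer bytes were never
    // consumed; in the shuffle they would be left on the stream for the
    // next map output's header read to trip over.
    System.out.println("decompressed bytes: " + produced);           // 13
    System.out.println("unconsumed input:   " + inf.getRemaining()); // 8
  }
}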

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
