hadoop-mapreduce-issues mailing list archives

From "Joydeep Sen Sarma (JIRA)" <j...@apache.org>
Subject [jira] Commented: (MAPREDUCE-2212) MapTask and ReduceTask should only compress/decompress the final map output file
Date Wed, 08 Dec 2010 02:31:07 GMT

    [ https://issues.apache.org/jira/browse/MAPREDUCE-2212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12969152#action_12969152 ]

Joydeep Sen Sarma commented on MAPREDUCE-2212:
----------------------------------------------

Todd - do you know for sure whether the benefit comes from compressing the final spill or from compressing the intermediate sort runs? If the experiment is just to turn compression on/off and run some benchmark, it wouldn't be clear whether any win comes from reduced network transfer (map->reduce) or from faster mappers (if they were disk bound without compression).

In general I have seen that the map-reduce stack consumes data at a very low rate (it's CPU bound by the time it reaches 10-20 MBps per task). (Obviously this is a very loose statement and depends a lot on what the mappers are doing, etc.) So even with 6 disks (say a total of 300 MBps of streaming read/write bandwidth) and 8 cores (say about 200 MBps of processing bandwidth), it would seem we would be CPU bound before we would be disk-throughput bound. It would be nice to get more accurate numbers along these lines; a quick single-core compression benchmark, like the sketch below, would be one way to start.
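
A minimal sketch of such a measurement, using the JDK's DeflaterOutputStream as a stand-in for whatever map output codec is configured (the class name and buffer sizes here are illustrative assumptions, not Hadoop code): compare the MB/s it reports per core against per-disk streaming bandwidth to see which side saturates first.

    import java.io.OutputStream;
    import java.util.Random;
    import java.util.zip.Deflater;
    import java.util.zip.DeflaterOutputStream;

    public class CompressThroughputSketch {
        public static void main(String[] args) throws Exception {
            // 64 MB of synthetic "map output". Random bytes are close to
            // incompressible, so real intermediate data (which usually
            // compresses well) may give quite different numbers.
            byte[] data = new byte[64 * 1024 * 1024];
            new Random(42).nextBytes(data);

            // Discard the compressed bytes so we measure CPU cost only,
            // not disk or memory-copy cost.
            OutputStream sink = new OutputStream() {
                @Override public void write(int b) {}
                @Override public void write(byte[] b, int off, int len) {}
            };

            long start = System.nanoTime();
            DeflaterOutputStream out =
                new DeflaterOutputStream(sink, new Deflater(Deflater.BEST_SPEED), 64 * 1024);
            out.write(data);
            out.finish();
            double secs = (System.nanoTime() - start) / 1e9;

            System.out.printf("single-core compression: %.1f MB/s%n",
                data.length / (1024.0 * 1024.0) / secs);
        }
    }
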

> MapTask and ReduceTask should only compress/decompress the final map output file
> --------------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-2212
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2212
>             Project: Hadoop Map/Reduce
>          Issue Type: Improvement
>          Components: task
>    Affects Versions: 0.23.0
>            Reporter: Scott Chen
>            Assignee: Scott Chen
>             Fix For: 0.23.0
>
>
> Currently, if we set mapred.map.output.compression.codec:
> 1. MapTask will compress every spill, decompress every spill, then merge and compress the final map output file.
> 2. ReduceTask will decompress, merge, and compress every map output file, repeating the compression/decompression on every merge pass.
> This causes all the data to be compressed and decompressed many times.
> The reason we need mapred.map.output.compression.codec is to reduce network traffic.
> We should not compress/decompress the data again and again during the merge sort.
> We should only compress the final map output file that will be transmitted over the network.
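
For context, a rough sketch of how map output compression is typically enabled with the old mapred API; the class name is only for illustration, and the property names in the comments are the pre-0.23 configuration keys the description refers to.

    import org.apache.hadoop.io.compress.DefaultCodec;
    import org.apache.hadoop.mapred.JobConf;

    public class MapOutputCompressionExample {
        public static void main(String[] args) {
            JobConf conf = new JobConf();
            // Turn on intermediate (map output) compression; this is the
            // setting whose spill/merge cost the issue is about.
            conf.setCompressMapOutput(true);                      // mapred.compress.map.output
            conf.setMapOutputCompressorClass(DefaultCodec.class); // mapred.map.output.compression.codec
            System.out.println(conf.get("mapred.map.output.compression.codec"));
        }
    }
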

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

