hadoop-common-dev mailing list archives

From "Hadoop QA (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-2943) Compression for intermediate map output is broken
Date Sat, 08 Mar 2008 21:23:46 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-2943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12576616#action_12576616 ]

Hadoop QA commented on HADOOP-2943:

+1 overall.  Here are the results of testing the latest attachment 
against trunk revision 619744.

    @author +1.  The patch does not contain any @author tags.

    tests included +1.  The patch appears to include 3 new or modified tests.

    javadoc +1.  The javadoc tool did not generate any warning messages.

    javac +1.  The applied patch does not generate any new javac compiler warnings.

    release audit +1.  The applied patch does not generate any new release audit warnings.

    findbugs +1.  The patch does not introduce any new Findbugs warnings.

    core tests +1.  The patch passed core unit tests.

    contrib tests +1.  The patch passed contrib unit tests.

Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/1917/testReport/
Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/1917/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/1917/artifact/trunk/build/test/checkstyle-errors.html
Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/1917/console

This message is automatically generated.

> Compression for intermediate map output is broken
> -------------------------------------------------
>                 Key: HADOOP-2943
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2943
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>    Affects Versions: 0.17.0
>            Reporter: Chris Douglas
>            Assignee: Chris Douglas
>             Fix For: 0.17.0
>         Attachments: 2943.patch, 2943.patch, 2943.patch, 2943.patch
> It looks like SequenceFile::RecordCompressWriter and SequenceFile::BlockCompressWriter
> weren't updated to use the new serialization added in HADOOP-1986. This causes failures
> in the merge when mapred.compress.map.output is true and
> mapred.map.output.compression.type=BLOCK:
> {noformat}
> java.io.IOException: File is corrupt!
>         at org.apache.hadoop.io.SequenceFile$Reader.readBlock(SequenceFile.java:1656)
>         at org.apache.hadoop.io.SequenceFile$Reader.nextRawKey(SequenceFile.java:1969)
>         at org.apache.hadoop.io.SequenceFile$Sorter$SegmentDescriptor.nextRawKey(SequenceFile.java:2985)
>         at org.apache.hadoop.io.SequenceFile$Sorter$MergeQueue.merge(SequenceFile.java:2785)
>         at org.apache.hadoop.io.SequenceFile$Sorter.merge(SequenceFile.java:2494)
>         at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.mergeParts(MapTask.java:654)
>         at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:740)
>         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:212)
>         at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:2077)
> {noformat}
> mapred.map.output.compression.type=RECORD works for Writables, but should also be updated to use the new serialization.
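
For readers outside the thread: the failure is driven entirely by the two job-configuration properties named in the description. Below is a minimal, illustrative sketch of a job setup that exercises the broken path, written against the 0.17-era mapred API; the class name and the elided job wiring are hypothetical, not taken from the patch.

{noformat}
import org.apache.hadoop.mapred.JobConf;

public class Hadoop2943Repro {  // hypothetical class, for illustration only
  public static void main(String[] args) {
    JobConf conf = new JobConf(Hadoop2943Repro.class);

    // First trigger condition: compress intermediate map output.
    conf.setBoolean("mapred.compress.map.output", true);

    // Second trigger condition: block-compress the intermediate
    // SequenceFiles. Per the report, RECORD still worked for Writables,
    // while BLOCK corrupted the map-side merge.
    conf.set("mapred.map.output.compression.type", "BLOCK");

    // ... configure mapper/reducer and input/output paths, then submit
    // with JobClient.runJob(conf); the merge in MapTask then fails with
    // the "File is corrupt!" trace shown above.
  }
}
{noformat}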

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
