hadoop-common-dev mailing list archives

From "Owen O'Malley (JIRA)" <j...@apache.org>
Subject [jira] Created: (HADOOP-1014) map/reduce is corrupting data between map and reduce
Date Tue, 13 Feb 2007 05:04:05 GMT
map/reduce is corrupting data between map and reduce

                 Key: HADOOP-1014
                 URL: https://issues.apache.org/jira/browse/HADOOP-1014
             Project: Hadoop
          Issue Type: Bug
          Components: mapred
    Affects Versions: 0.11.1
            Reporter: Owen O'Malley
         Assigned To: Devaraj Das
            Priority: Blocker

It appears that a random data corruption is happening between the map and the reduce. This
looks to be a blocker until it is resolved. There were two relevant messages on hadoop-dev:

from Mike Smith:

Map/reduce jobs are not consistent in both the Hadoop 0.11 release and trunk:
rerunning the same job produces different results. I have observed this
inconsistency in the map output across different jobs. A simple way to
double-check is to use Hadoop 0.11 with the Nutch trunk.

from Albert Chern:

I am having the same problem with my own map/reduce jobs.  I have a job
that requires two pieces of data per key, and as a sanity check I make
sure the reducer receives both, but sometimes it doesn't.  What's even
stranger is that the same tasks that complain about missing key/value pairs
may fail two or three times but then succeed on a subsequent try, which
leads me to believe the bug has to do with randomization (I'm not sure,
but I think the map outputs are shuffled?).
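The per-key sanity check Chern describes can be sketched as a small, self-contained simulation of what a reducer sees after the shuffle (a hypothetical illustration, not his actual job; the constant of two values per key and all names here are assumptions):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ReduceSanityCheck {
    // Assumption from the report: every key should arrive with exactly
    // two values after the map outputs are shuffled and grouped.
    static final int EXPECTED_VALUES_PER_KEY = 2;

    // Group (key, value) pairs by key, as the shuffle does before reduce.
    static Map<String, List<String>> group(List<String[]> pairs) {
        Map<String, List<String>> grouped = new HashMap<>();
        for (String[] kv : pairs) {
            grouped.computeIfAbsent(kv[0], k -> new ArrayList<>()).add(kv[1]);
        }
        return grouped;
    }

    // Return the keys that did not receive the expected number of values;
    // a non-empty result signals the kind of corruption described above.
    static List<String> missingKeys(Map<String, List<String>> grouped) {
        List<String> bad = new ArrayList<>();
        for (Map.Entry<String, List<String>> e : grouped.entrySet()) {
            if (e.getValue().size() != EXPECTED_VALUES_PER_KEY) {
                bad.add(e.getKey());
            }
        }
        return bad;
    }

    public static void main(String[] args) {
        List<String[]> pairs = new ArrayList<>();
        pairs.add(new String[]{"a", "left"});
        pairs.add(new String[]{"a", "right"});
        pairs.add(new String[]{"b", "left"}); // "b" lost its second value
        System.out.println(missingKeys(group(pairs))); // prints [b]
    }
}
```

In a real Hadoop reducer the same check would simply count the values behind the iterator for each key and flag any key whose count is wrong, which is presumably how the missing pairs were detected.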

All of my code works perfectly with 0.9, so I went back and just compared
the sizes of the outputs.  For some jobs, the outputs from 0.11 were
consistently 4 bytes larger, probably due to changes in SequenceFile.  But
for others, the output sizes were all over the place.  Some partitions were
empty, some were correct, and some were missing data.  There seems to be
something seriously wrong with 0.11, so I suggest you use 0.9.  I've been
trying to pinpoint the bug but its random nature is really annoying.

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
