hadoop-common-dev mailing list archives

From "Owen O'Malley (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-1014) map/reduce is corrupting data between map and reduce
Date Sat, 17 Feb 2007 08:59:05 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-1014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12473897 ]

Owen O'Malley commented on HADOOP-1014:
---------------------------------------

I see the cause of the ConcurrentModificationException, and likely of the corruption problem as well.
From the JavaDoc for Collections.synchronizedMap:

| It is imperative that the user manually synchronize on the returned map when iterating
| over any of its collection views:
|
|  Map m = Collections.synchronizedMap(new HashMap());
|      ...
|  Set s = m.keySet();  // Needn't be in synchronized block
|      ...
|  synchronized(m) {  // Synchronizing on m, not s!
|      Iterator i = s.iterator(); // Must be in synchronized block
|      while (i.hasNext())
|          foo(i.next());
|  }
|
| Failure to follow this advice may result in non-deterministic behavior.

InMemoryFileSystem synchronizes on the InMemoryFileSystem object itself rather than on the
synchronizedMap when it iterates. That mismatch produces the non-determinism, since the other
operations (add, remove, etc.) lock the synchronizedMap.
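
To make the mismatch concrete, here is a minimal sketch of the pattern (the class and member
names below are hypothetical, not the actual InMemoryFileSystem code). The broken method
iterates while holding the wrapper object's lock, while add() locks the map itself, so the
two can interleave:

    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical sketch; names do not match the real InMemoryFileSystem.
    class WrongMonitorDemo {
      private final Map<String, Integer> files =
          Collections.synchronizedMap(new HashMap<String, Integer>());

      // BROKEN: locks 'this', but put()/remove() on 'files' lock the map's
      // own monitor, so this iteration is effectively unsynchronized and can
      // throw ConcurrentModificationException or read inconsistent state.
      synchronized int totalSizeBroken() {
        int total = 0;
        for (int size : files.values()) {
          total += size;
        }
        return total;
      }

      // FIXED: iterate while holding the synchronizedMap's own monitor, as
      // the Collections.synchronizedMap JavaDoc requires.
      int totalSizeFixed() {
        int total = 0;
        synchronized (files) {
          for (int size : files.values()) {
            total += size;
          }
        }
        return total;
      }

      void add(String name, int size) {
        files.put(name, size);  // synchronizes on the map, not on 'this'
      }
    }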

> map/reduce is corrupting data between map and reduce
> ----------------------------------------------------
>
>                 Key: HADOOP-1014
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1014
>             Project: Hadoop
>          Issue Type: Bug
>          Components: mapred
>    Affects Versions: 0.11.1
>            Reporter: Owen O'Malley
>         Assigned To: Devaraj Das
>            Priority: Blocker
>             Fix For: 0.11.2
>
>         Attachments: TestMapRed.java, TestMapRed.patch, TestMapRed2.patch, zero-size-inmem-fs.patch
>
>
> It appears that random data corruption is happening between the map and the reduce.
> This looks to be a blocker until it is resolved. There were two relevant messages on hadoop-dev:
> from Mike Smith:
> The map/reduce jobs are not consistent in both the hadoop 0.11 release and trunk
> when you rerun the same job. I have observed this inconsistency in the map
> output across different jobs. A simple test to double-check is to use hadoop
> 0.11 with nutch trunk.
> from Albert Chern:
> I am having the same problem with my own map reduce jobs.  I have a job
> which requires two pieces of data per key, and just as a sanity check I make
> sure that it gets both in the reducer, but sometimes it doesn't.  What's
> even stranger is that the same tasks that complain about missing key/value pairs
> will maybe fail two or three times, but then succeed on a subsequent try,
> which leads me to believe that the bug has to do with randomization (I'm not
> sure, but I think the map outputs are shuffled?).
> All of my code works perfectly with 0.9, so I went back and just compared
> the sizes of the outputs.  For some jobs, the outputs from 0.11 were
> consistently 4 bytes larger, probably due to changes in SequenceFile.  But
> for others, the output sizes were all over the place.  Some partitions were
> empty, some were correct, and some were missing data.  There seems to be
> something seriously wrong with 0.11, so I suggest you use 0.9.  I've been
> trying to pinpoint the bug but its random nature is really annoying.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

