hadoop-common-dev mailing list archives

From "Arun C Murthy (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-3285) map tasks with node local splits do not always read from local nodes
Date Tue, 22 Apr 2008 00:21:21 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-3285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arun C Murthy updated HADOOP-3285:
----------------------------------

    Status: Open  (was: Patch Available)

Owen, I'm guessing you uploaded the wrong patch:
{code}
@@ -279,12 +279,17 @@
   protected int getBlockIndex(BlockLocation[] blkLocations, 
                               long offset) {
     for (int i = 0 ; i < blkLocations.length; i++) {
+      // is the offset inside this block?
       if ((blkLocations[i].getOffset() <= offset) &&
-        ((blkLocations[i].getOffset() + blkLocations[i].getLength()) >= 
-        offset))
-          return i;
+          (offset <= blkLocations[i].getOffset() + blkLocations[i].getLength())){
+        return i;
+      }

{code}
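The diff above still uses an inclusive upper bound (`offset <= start + length`), so an offset that lands exactly on a block boundary matches the *previous* block as well as the one that starts there; presumably the intended fix is a half-open interval. A minimal sketch of that half-open check, using a hypothetical `BlockInfo` stand-in for Hadoop's `BlockLocation` (only the offset and length are modeled here):

```java
// Sketch only: BlockInfo is a hypothetical stand-in for BlockLocation,
// carrying just the fields getBlockIndex needs.
public class BlockIndexSketch {
    static final class BlockInfo {
        final long offset, length;
        BlockInfo(long offset, long length) {
            this.offset = offset;
            this.length = length;
        }
    }

    static int getBlockIndex(BlockInfo[] blocks, long offset) {
        for (int i = 0; i < blocks.length; i++) {
            // Half-open interval [start, start + length): the strict '<' on
            // the upper bound keeps a boundary offset out of the previous
            // block, so the split maps to the block whose hosts actually
            // hold its data.
            if (blocks[i].offset <= offset
                    && offset < blocks[i].offset + blocks[i].length) {
                return i;
            }
        }
        throw new IllegalArgumentException("offset " + offset
                + " is outside any block");
    }

    public static void main(String[] args) {
        long blk = 128L << 20; // 128MB blocks, as in the report below
        BlockInfo[] blocks = {
            new BlockInfo(0, blk),
            new BlockInfo(blk, blk)
        };
        // An offset exactly on the first boundary belongs to block 1;
        // the inclusive '<=' version would return block 0 instead.
        System.out.println(getBlockIndex(blocks, blk));     // 1
        System.out.println(getBlockIndex(blocks, blk - 1)); // 0
    }
}
```

With block-sized splits, every split offset after the first sits exactly on a boundary, so the inclusive check picks the wrong block's hosts roughly half the time, which is consistent with the 50/50 local/remote traffic described below.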

> map tasks with node local splits do not always read from local nodes
> --------------------------------------------------------------------
>
>                 Key: HADOOP-3285
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3285
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>            Reporter: Runping Qi
>            Assignee: Owen O'Malley
>            Priority: Blocker
>             Fix For: 0.17.0
>
>         Attachments: 3285.patch, 3285.patch
>
>
> I ran a simple map/reduce job counting the number of records in the input data.
> The number of reducers was set to 1.
> I did not set the number of mappers. Thus by default, all splits except the last split of a file contain one dfs block (128MB in my case).
> The web gui indicated that 99% of map tasks were with local splits.
> Thus I expected that most of the dfs reads should have come from the local data nodes.
> However, when I examined the traffic on the ethernet interfaces,
> I found that about 50% of each node's traffic went through the loopback interface
> and the other 50% went through the ethernet card!
> Also, the switch monitoring indicated that a lot of traffic went through the uplinks and across racks!
> This indicated that the data locality feature does not work as expected.
> To confirm that, I set the number of map tasks to a very high number so that it forced the split size down to about 27MB.
> The web gui indicated that 99% of map tasks were with local splits, as expected.
> The ethernet interface monitor showed that almost 100% of the traffic went through the loopback interface, as it should.
> Also, the switch monitoring indicated that there was very little traffic through the uplinks and across racks.
> This implies that some corner cases are not handled properly.

-- 
This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.

