hadoop-common-dev mailing list archives

From "Marc Colosimo (JIRA)" <j...@apache.org>
Subject [jira] Created: (HADOOP-6365) distributed cache doesn't work with HDFS and another file system
Date Thu, 05 Nov 2009 20:43:32 GMT
distributed cache doesn't work with HDFS and another file system
----------------------------------------------------------------

                 Key: HADOOP-6365
                 URL: https://issues.apache.org/jira/browse/HADOOP-6365
             Project: Hadoop Common
          Issue Type: Bug
          Components: filecache
    Affects Versions: 0.20.1
         Environment: CentOS
            Reporter: Marc Colosimo


This is a continuation of http://issues.apache.org/jira/browse/HADOOP-5635 (JIRA wouldn't
let me edit that one). I found another issue with DistributedCache when using a file system
other than HDFS. In my case I have TWO active file systems, with HDFS being the default file system.
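
For illustration only (this setup is not part of the original report): a job with HDFS as the
default file system might register a cache file that lives on a second file system, here shown
as an s3n:// store as a stand-in for whatever second scheme is configured. The URI scheme is the
only thing that says which file system the cached file is on.

    import java.net.URI;

    import org.apache.hadoop.filecache.DistributedCache;
    import org.apache.hadoop.mapred.JobConf;

    public class CacheSetupSketch {
      public static void main(String[] args) throws Exception {
        // fs.default.name points at HDFS; the cache file lives on a second file system.
        JobConf conf = new JobConf(CacheSetupSketch.class);
        // "s3n://my-bucket/..." is a placeholder for the second file system in use.
        DistributedCache.addCacheFile(new URI("s3n://my-bucket/lookup/table.dat#table.dat"), conf);
      }
    }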

My fix includes two additional changes (beyond HADOOP-5635) to get it to work with another filesystem
scheme (plus the changes from the original patch). I've tested this and it works with my code
on HDFS together with another filesystem. I have similar changes to mapreduce.filecache.TaskDistributedCacheManager
and TrackerDistributedCacheManager (0.22.0).

Basically, URI.getPath() is called instead of URI.toString(). toString() returns the scheme
plus the path, which is important for finding the file to copy (that is, for getting the correct
file system). Otherwise the code searches the default file system (in this case HDFS) for the file.
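
A minimal sketch of why the scheme matters, not the actual patch; the s3n:// URI is again only a
stand-in for the second file system:

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class SchemeMattersSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();          // default FS assumed to be HDFS
        URI uri = new URI("s3n://my-bucket/lookup/table.dat");

        // getPath() drops the scheme, so the path resolves against the default file system:
        FileSystem defaultFs = new Path(uri.getPath()).getFileSystem(conf);   // HDFS

        // toString() keeps the scheme, so the right FileSystem is looked up for the copy:
        FileSystem cacheFs = new Path(uri.toString()).getFileSystem(conf);    // s3n

        System.out.println(defaultFs.getUri() + " vs " + cacheFs.getUri());
      }
    }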


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

