hadoop-common-dev mailing list archives

From "Allen Wittenauer (JIRA)" <j...@apache.org>
Subject [jira] [Resolved] (HADOOP-6365) distributed cache doesn't work with HDFS and another file system
Date Tue, 29 Jul 2014 20:00:39 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-6365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Allen Wittenauer resolved HADOOP-6365.
--------------------------------------

    Resolution: Fixed

This looks fixed.

> distributed cache doesn't work with HDFS and another file system
> ----------------------------------------------------------------
>
>                 Key: HADOOP-6365
>                 URL: https://issues.apache.org/jira/browse/HADOOP-6365
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: filecache
>    Affects Versions: 0.20.1
>         Environment: CentOS
>            Reporter: Marc Colosimo
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> This is a continuation of http://issues.apache.org/jira/browse/HADOOP-5635 (JIRA wouldn't let me edit that one). I found another issue with DistributedCache using something besides HDFS. In my case I have TWO active file systems, with HDFS being the default file system.
> My fix includes two additional changes (beyond HADOOP-5635) to get it to work with another filesystem scheme (plus the changes from the original patch). I've tested this and it works with my code on HDFS alongside another file system. I have similar changes to mapreduce.filecache.TaskDistributedCacheManager and TrackerDistributedCacheManager (0.22.0).
> Basically, URI.getPath() is called instead of URI.toString(). toString() returns the scheme plus the path, and the scheme is what's needed to find the file to copy (i.e., to resolve the right file system). getPath() drops the scheme, so the lookup falls back to the default file system (in this case HDFS) and doesn't find the file.
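For context, here is a minimal sketch (not the actual filecache patch) of why dropping the scheme sends the lookup to the wrong file system; the s3:// URI below is only an illustrative placeholder:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class CacheFileLookup {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Hypothetical cache entry living on a non-default file system.
        URI cacheUri = new URI("s3://bucket/libs/dep.jar");

        // Scheme dropped: getPath() yields "/libs/dep.jar", so the path is
        // resolved against the default file system (fs.defaultFS, here HDFS),
        // where the file does not exist.
        FileSystem defaultFs = new Path(cacheUri.getPath()).getFileSystem(conf);

        // Scheme kept: the full URI (what toString() preserves) lets Hadoop
        // pick the FileSystem that actually holds the file.
        FileSystem sourceFs = FileSystem.get(cacheUri, conf);

        System.out.println("lookup via path only -> " + defaultFs.getUri());
        System.out.println("lookup via full URI  -> " + sourceFs.getUri());
      }
    }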



--
This message was sent by Atlassian JIRA
(v6.2#6252)
