hadoop-common-dev mailing list archives

From "Tom White (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-5635) distributed cache doesn't work with other distributed file systems
Date Fri, 01 May 2009 14:34:30 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-5635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12705004#action_12705004 ]

Tom White commented on HADOOP-5635:
-----------------------------------

Andrew,

This looks like a good change to me. Have you thought about how to write a unit test for this?

Also, the documentation in DistributedCache should be updated to remove its HDFS assumptions.


> distributed cache doesn't work with other distributed file systems
> ------------------------------------------------------------------
>
>                 Key: HADOOP-5635
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5635
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: filecache
>            Reporter: Andrew Hitchcock
>            Priority: Minor
>         Attachments: fix-distributed-cache.patch
>
>
> Currently the DistributedCache checks whether the file to be included is an HDFS
> URI. If the URI isn't in HDFS, it returns the default filesystem. This prevents using other
> distributed file systems -- such as s3, s3n, or kfs -- with the distributed cache. When a user
> tries to use one of those filesystems, it reports an error that it can't find the path in
> HDFS.
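
The failure mode described above can be illustrated in isolation. The sketch below is not the attached patch; it is a simplified, hypothetical stand-in (the class and method names are invented) showing why a scheme check that special-cases "hdfs" breaks s3/s3n/kfs URIs, and why resolving the filesystem from the URI's own scheme (as Hadoop's real `FileSystem.get(URI, Configuration)` does) fixes it:

```java
import java.net.URI;

public class CacheUriCheck {
    // Hypothetical stand-in for the buggy behavior: only "hdfs" URIs are
    // resolved against their own filesystem; every other scheme falls back
    // to the default filesystem, so s3/s3n/kfs paths cannot be found there.
    static String resolveFileSystemBuggy(URI uri, String defaultFs) {
        if ("hdfs".equals(uri.getScheme())) {
            return uri.getScheme();
        }
        return defaultFs; // wrong for s3, s3n, kfs, ...
    }

    // Sketch of the fix: honor whatever scheme the URI carries, and fall
    // back to the default filesystem only for scheme-less paths.
    static String resolveFileSystemFixed(URI uri, String defaultFs) {
        return uri.getScheme() != null ? uri.getScheme() : defaultFs;
    }

    public static void main(String[] args) {
        URI s3 = URI.create("s3n://bucket/cache/file.jar");
        System.out.println(resolveFileSystemBuggy(s3, "hdfs")); // resolves to hdfs (wrong)
        System.out.println(resolveFileSystemFixed(s3, "hdfs")); // resolves to s3n
    }
}
```

In the real code the fix would amount to asking the URI's own filesystem for the cached file rather than assuming HDFS, which is also what makes Tom's point about removing HDFS assumptions from the DistributedCache documentation follow naturally.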

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

