hadoop-pig-dev mailing list archives

From "Benjamin Reed (JIRA)" <j...@apache.org>
Subject [jira] Commented: (PIG-102) Dont copy to DFS if source filesystem marked as shared
Date Tue, 11 Mar 2008 17:36:46 GMT

    [ https://issues.apache.org/jira/browse/PIG-102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12577526#action_12577526 ]

Benjamin Reed commented on PIG-102:
-----------------------------------

There is no need for copying back. Inputs don't get changed.

I'm not a fan of thinking of it as staging. The fact that we move local files to HDFS is an
implementation detail. 'file:' indicates that the data is in the local file system rather than HDFS.

I also like "shared" because it indicates that the data is shared by all the machines and you
want to take advantage of it. (Note: it doesn't have to be NFS. If you rsync a directory across
all machines, that works just as well.)


> Dont copy to DFS if source filesystem marked as shared
> ------------------------------------------------------
>
>                 Key: PIG-102
>                 URL: https://issues.apache.org/jira/browse/PIG-102
>             Project: Pig
>          Issue Type: New Feature
>          Components: impl
>         Environment: Installations with shared folders on all nodes (eg NFS)
>            Reporter: Craig Macdonald
>         Attachments: shared.patch
>
>
> I've been playing with Pig using three setups:
> (a) local
> (b) hadoop mapred with hdfs
> (c) hadoop mapred with file:///path/to/shared/fs as the default file system
> In our local setup, various NFS filesystems are shared between all machines (including mapred nodes), e.g. /users, /local.
> I would like Pig to note when input files are in a file:// directory that has been marked as shared, and hence not copy them to DFS.
> Similarly, the Torque PBS resource manager has a usecp directive, which notes when a filesystem location is shared between all nodes (and hence scp is not needed; cp alone can be used). See http://www.clusterresources.com/wiki/doku.php?id=torque:6.2_nfs_and_other_networked_filesystems
> It would be good to have a configurable setting in Pig that declares when a filesystem is shared, and hence no copying between file:// and hdfs:// is needed.
> If commands were used for this, an example in our setup might be:
> sharedFS file:///local/
> sharedFS file:///users/
> This command should be used with care. Obviously if you have 1000 nodes all accessing a shared file in NFS, then it would have been better to "hadoopify" the file.
> The likely area of code to patch is src/org/apache/pig/impl/io/FileLocalizer.java, hadoopify(String, PigContext).

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

