From: "Pete Wyckoff (JIRA)"
To: core-dev@hadoop.apache.org
Subject: [jira] Updated: (HADOOP-4) tool to mount dfs on linux
Date: Tue, 2 Sep 2008 09:23:44 -0700 (PDT)
Message-ID: <2051411172.1220372624787.JavaMail.jira@brutus>

     [ https://issues.apache.org/jira/browse/HADOOP-4?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pete Wyckoff updated HADOOP-4:
------------------------------

    Description:

This is a FUSE module for Hadoop's HDFS. It allows one to mount HDFS as a Unix filesystem and optionally export that mount point to other machines. rmdir, mv, mkdir, and rm are all supported; just not cp, touch, and the like. Actual writes require: https://issues.apache.org/jira/browse/HADOOP-3485

BUILDING:

Requirements:

1. A Linux kernel newer than 2.6.9 with FUSE built in, or the FUSE kernel module, i.e., you compile it yourself and then modprobe it. You are better off with the former option if possible. (Note that for now, a kernel with FUSE built in does not allow you to export the mount through NFS, so be warned. See the FUSE email list for more about this.)
2. FUSE installed in /usr/local, or the FUSE_HOME ant environment variable set to its location.

To build:

1. In HADOOP_HOME: ant compile-contrib -Dcompile.c++=1 -Dfusedfs=1

NOTE: on the amd64 architecture, libhdfs will not compile unless you edit src/c++/libhdfs/Makefile and set OS_ARCH=amd64 (probably the same for other architectures too). A build sketch follows below.
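As a rough sketch of the build steps above (the sed edit is just one convenient way to apply the OS_ARCH change; editing the Makefile by hand is equivalent, and FUSE_HOME is only needed when FUSE is not under /usr/local):

  # Build libhdfs and the fuse-dfs contrib module from the top of the Hadoop tree.
  cd $HADOOP_HOME

  # amd64 only: set OS_ARCH in the libhdfs Makefile first, per the NOTE above.
  sed -i 's/^OS_ARCH *=.*/OS_ARCH=amd64/' src/c++/libhdfs/Makefile

  # Point the build at a non-standard FUSE install if needed.
  export FUSE_HOME=/usr/local

  ant compile-contrib -Dcompile.c++=1 -Dfusedfs=1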
--------------------------------------------------------------------------------
CONFIGURING:

Look at all the paths in fuse_dfs_wrapper.sh and either correct them or set them in your environment before running. (Note that for automount and mounting as root, you probably cannot control the environment, so it is best to set them in the wrapper.)

INSTALLING:

1. mkdir /mnt/dfs (or wherever you want to mount it)
2. fuse_dfs_wrapper.sh dfs://hadoop_server1.foo.com:9000 /mnt/dfs -d and, from another terminal, try ls /mnt/dfs

If step 2 works, try again without debug mode, i.e., drop the -d. (Note: the common problems are that libhdfs.so, libjvm.so, or libfuse.so is not on your LD_LIBRARY_PATH, or that your CLASSPATH does not contain the hadoop and other required jars.) A sketch of this first mount, including the environment setup, follows below.
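A minimal sketch of that first manual mount; the library and jar paths below are assumptions and will vary with your JVM, architecture, and Hadoop release:

  # The three shared libraries from the note above must be findable.
  export LD_LIBRARY_PATH=$HADOOP_HOME/build/libhdfs:$JAVA_HOME/jre/lib/amd64/server:/usr/local/lib:$LD_LIBRARY_PATH

  # CLASSPATH needs the hadoop core jar and conf directory (jar name varies by release).
  export CLASSPATH=$HADOOP_HOME/hadoop-core.jar:$HADOOP_HOME/conf:$CLASSPATH

  mkdir -p /mnt/dfs

  # Debug mode first: -d keeps fuse_dfs in the foreground and prints errors.
  fuse_dfs_wrapper.sh dfs://hadoop_server1.foo.com:9000 /mnt/dfs -d

  # From another terminal, verify the mount; if it works, retry without -d.
  ls /mnt/dfs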
--------------------------------------------------------------------------------
DEPLOYING:

In a root shell do the following:

1. Add the following to /etc/fstab:
   fuse_dfs#dfs://hadoop_server.foo.com:9000 /mnt/dfs fuse allow_other,rw 0 0
2. mount /mnt/dfs

Expect problems with fuse_dfs not being found; you will probably need to add it to /sbin. Then expect problems finding the three libraries above; add their directories using ldconfig.

--------------------------------------------------------------------------------
EXPORTING:

Add the following to /etc/exports:

  /mnt/hdfs *.foo.com(no_root_squash,rw,fsid=1,sync)

NOTE: you cannot export the mount with a FUSE module built into the kernel, e.g., kernel 2.6.17. For info on this, refer to the FUSE wiki. A deploy-and-export sketch follows below.
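As a sketch of the deploy-and-export sequence, run as root; the build-output paths are assumptions, and exportfs -ra is the usual way to reload /etc/exports on Linux:

  # Put fuse_dfs where mount(8) can find it, per the note above.
  cp $HADOOP_HOME/build/contrib/fuse-dfs/fuse_dfs /sbin/

  # Register the library directories with the dynamic linker, then refresh its cache.
  # (the JVM server directory is an assumption; adjust for your JVM and architecture)
  echo "$HADOOP_HOME/build/libhdfs"       > /etc/ld.so.conf.d/fuse-dfs.conf
  echo "$JAVA_HOME/jre/lib/amd64/server" >> /etc/ld.so.conf.d/fuse-dfs.conf
  ldconfig

  # The fstab entry from DEPLOYING step 1, then mount it.
  echo "fuse_dfs#dfs://hadoop_server.foo.com:9000 /mnt/dfs fuse allow_other,rw 0 0" >> /etc/fstab
  mount /mnt/dfs

  # NFS export: the line from EXPORTING, then reload the export table.
  echo "/mnt/hdfs *.foo.com(no_root_squash,rw,fsid=1,sync)" >> /etc/exports
  exportfs -ra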
--------------------------------------------------------------------------------
ADVANCED:

You may want to ensure certain directories cannot be deleted from the shell until the FS has permissions. You can set this in src/contrib/fuse-dfs/build.xml.

    was:

This is a FUSE module for Hadoop's HDFS. It allows one to mount HDFS as a Unix filesystem and optionally export that mount point to other machines. For now, writes are disabled, as they require HADOOP-1700 (file appends), which I guess won't be ready until 0.18 or so. rmdir, mv, mkdir, and rm are all supported; just not cp, touch, and the like. (The rest of the old description is unchanged from the text above.)

> tool to mount dfs on linux
> --------------------------
>
>                 Key: HADOOP-4
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: contrib/fuse-dfs
>         Environment: OSs that support FUSE. Includes Linux, Mac OS X, OpenSolaris... http://fuse.sourceforge.net/wiki/index.php/OperatingSystems
>            Reporter: John Xing
>            Assignee: Pete Wyckoff
>             Fix For: 0.18.0
>
>         Attachments: fuse-dfs.tar.gz, fuse-dfs.tar.gz, fuse-dfs.tar.gz, fuse-dfs.tar.gz, fuse-dfs.tar.gz, fuse-hadoop-0.1.0_fuse-j.2.2.3_hadoop.0.5.0.tar.gz, fuse-hadoop-0.1.0_fuse-j.2.4_hadoop.0.5.0.tar.gz, fuse-hadoop-0.1.1.tar.gz, fuse-j-hadoopfs-03.tar.gz, fuse_dfs.c, fuse_dfs.c, fuse_dfs.c, fuse_dfs.c, fuse_dfs.c, fuse_dfs.sh, fuse_dfs.tar.gz, HADOOP-4.patch, HADOOP-4.patch, HADOOP-4.patch, HADOOP-4.patch, HADOOP-4.patch, HADOOP-4.patch, HADOOP-4.patch, Makefile, patch.txt, patch.txt, patch2.txt, patch3.txt, patch4.txt, patch4.txt, patch4.txt, patch5.txt, patch6.txt, patch6.txt

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.