Date: Mon, 28 Apr 2008 04:26:55 -0700 (PDT)
From: "Craig Macdonald (JIRA)"
To: core-dev@hadoop.apache.org
Message-ID: <611376973.1209382015768.JavaMail.jira@brutus>
Subject: [jira] Issue Comment Edited: (HADOOP-4) tool to mount dfs on linux

    [ https://issues.apache.org/jira/browse/HADOOP-4?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12592602#action_12592602 ]

craigm edited comment on HADOOP-4 at 4/28/08 4:26 AM:
---------------------------------------------------------------

I have some minor issues. I was working on compiling on Friday afternoon, but Doug beat me to it with identical comments ;-)

* I see that the ant script calls the makefile with the correct env vars overridden. In that case, do we need the bootstrap, automake, configure etc.? Why not a simpler build system, like the one used by libhdfs? As it stands, the makefile contains references to the system that configure was run on.
* build-contrib.xml in patch3.txt has the wrong path.
* I think it would be better if the built fuse_dfs module were placed in $HADOOP_HOME/contrib/fuse-dfs, in a similar manner to libhdfs etc.
* If we're keeping configure et al.:
** README.BUILD refers to bootstrap.sh, while the script is actually named plain_bootstrap.sh
** the configure script doesn't identify JARCH on non-64-bit platforms - see config.log for my platform:
{code}
configure:1377: checking target system type
configure:1391: result: i686-pc-linux-gnu
{code}
but JARCH is left unset when it should be i386 (a sketch of the mapping follows this list). I presume this works OK from the ant script?
* fuse_dfs_wrapper.sh has some issues - perhaps you could base it more closely on the fuse_dfs.sh I attached previously to this JIRA? (A second sketch follows this list.)
** various variables have to be written in by hand, e.g. JAVA_HOME; OS_ARCH has already been identified by configure/ant, so why can't they be set in the script?
** fuse_dfs should be invoked with "$@" instead of $1 $2
** the classpath is hard-coded - why not derive the whole classpath automatically from HADOOP_HOME?

Will test the new build in due course.

Maurizio - see http://en.wikipedia.org/wiki/Patch_(Unix) and apply the patch to a recent release of Hadoop (a worked example is sketched below). It requires FUSE.
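On the JARCH point, something along the following lines in the configure machinery would cover 32-bit platforms too. This is only a sketch of the mapping I have in mind - $target_cpu is the value autoconf derives from the target triple, and the i?86 -> i386 convention matches the JDK's jre/lib layout; none of this is taken from the patch itself:

{code}
# Sketch: map the autoconf target CPU to the JDK arch directory name
# (jre/lib/<arch>).  i686-pc-linux-gnu gives target_cpu=i686.
case "$target_cpu" in
  i?86)         JARCH=i386  ;;  # i386..i686 all use jre/lib/i386
  x86_64|amd64) JARCH=amd64 ;;
  sparc*)       JARCH=sparc ;;
  *)            JARCH="$target_cpu" ;;  # fall back to the raw CPU name
esac
{code}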
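To make the wrapper points concrete, here is the shape of script I have in mind: derive OS_ARCH, build the classpath from HADOOP_HOME, and pass all arguments through. It is a sketch only - the default HADOOP_HOME, the jar globs and the location of the fuse_dfs binary are my assumptions, not what the patch installs:

{code}
#!/bin/sh
# Sketch of fuse_dfs_wrapper.sh: derive everything, hand-edit nothing.
HADOOP_HOME=${HADOOP_HOME:-/usr/local/hadoop}   # assumed default location
JAVA_HOME=${JAVA_HOME:?JAVA_HOME must be set}

# Derive OS_ARCH the same way configure/ant would.
OS_ARCH=$(uname -m)
case "$OS_ARCH" in i?86) OS_ARCH=i386 ;; esac

# Build the classpath from HADOOP_HOME instead of hard-coding jar names.
CLASSPATH=$HADOOP_HOME/conf
for jar in "$HADOOP_HOME"/hadoop-*.jar "$HADOOP_HOME"/lib/*.jar; do
  CLASSPATH=$CLASSPATH:$jar
done
export CLASSPATH

# libhdfs and the JVM need to be on the library path.
export LD_LIBRARY_PATH=$HADOOP_HOME/libhdfs:$JAVA_HOME/jre/lib/$OS_ARCH/server

# "$@" forwards every argument, not just $1 and $2.
exec "$HADOOP_HOME"/contrib/fuse-dfs/fuse_dfs "$@"
{code}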
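And for Maurizio, applying the patch boils down to something like this - the release number here is only a placeholder for whatever is current, and -p0 assumes the patch was generated from the top of the source tree:

{code}
# Illustrative only: substitute the current release and patch file.
tar xzf hadoop-0.16.3.tar.gz
cd hadoop-0.16.3
patch -p0 < ../patch3.txt
{code}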
> tool to mount dfs on linux
> --------------------------
>
>                 Key: HADOOP-4
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: fs
>         Environment: OSs that support FUSE. Includes Linux, MacOSx, OpenSolaris... http://fuse.sourceforge.net/wiki/index.php/OperatingSystems
>            Reporter: John Xing
>            Assignee: Pete Wyckoff
>         Attachments: fuse-dfs.tar.gz, fuse-dfs.tar.gz, fuse-dfs.tar.gz, fuse-dfs.tar.gz, fuse-dfs.tar.gz, fuse-hadoop-0.1.0_fuse-j.2.2.3_hadoop.0.5.0.tar.gz, fuse-hadoop-0.1.0_fuse-j.2.4_hadoop.0.5.0.tar.gz, fuse-hadoop-0.1.1.tar.gz, fuse-j-hadoopfs-03.tar.gz, fuse_dfs.c, fuse_dfs.c, fuse_dfs.c, fuse_dfs.c, fuse_dfs.c, fuse_dfs.sh, fuse_dfs.tar.gz, Makefile, patch.txt, patch.txt, patch2.txt, patch3.txt
>
> tool to mount dfs on Unix or any OS that supports FUSE

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.