From: Brian Bockelman
To: core-user@hadoop.apache.org
Subject: Re: Hadoop+s3 & fuse-dfs
Date: Thu, 29 Jan 2009 06:49:53 -0600
In-Reply-To: <49810461.6000108@dcs.gla.ac.uk>

Hey all,

This is a long shot, but I've noticed before that libhdfs doesn't load
hadoop-site.xml *unless* hadoop-site.xml is in your local directory.
As a last resort, maybe cd to $HADOOP_HOME/conf and try running it
from there?

Brian
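For illustration, a rough sketch of that workaround. The
fuse_dfs_wrapper.sh path and the mount point here are assumptions
(they vary by build), and dfs://default:0 follows Craig's suggestion
further down the thread:

    # Run from the conf directory so libhdfs picks up hadoop-site.xml
    # (it reportedly only finds the file in the current directory).
    cd $HADOOP_HOME/conf

    # Launch fuse-dfs via its wrapper script (assumed contrib-build
    # path); -d keeps it in the foreground with FUSE debug output.
    $HADOOP_HOME/src/contrib/fuse-dfs/src/fuse_dfs_wrapper.sh \
        dfs://default:0 /mnt/s3backup -d
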
On Jan 28, 2009, at 7:20 PM, Craig Macdonald wrote:

> Hi Roopa,
>
> Glad it worked :-)
>
> Please file JIRA issues against the fuse-dfs / libhdfs components
> for anything that would have made it easier to mount the S3
> filesystem.
>
> Craig
>
> Roopa Sudheendra wrote:
>> Thanks. Yes, a setup with fuse-dfs and HDFS works fine. I think the
>> mount point was bad for whatever reason and was failing with that
>> error. I created another mount point, which resolved the transport
>> endpoint error.
>>
>> Also, I did have the -d option on my command. :)
>>
>> Roopa
>>
>> On Jan 28, 2009, at 6:35 PM, Craig Macdonald wrote:
>>
>>> Hi Roopa,
>>>
>>> Firstly, can you get fuse-dfs working against an HDFS instance?
>>> There is also a debug mode for FUSE: enable it by adding -d on
>>> the command line.
>>>
>>> C
>>>
>>> Roopa Sudheendra wrote:
>>>> Hey Craig,
>>>> I tried it the way you suggested, but I get "transport endpoint
>>>> not connected". Can I see the logs anywhere? I don't see anything
>>>> in /var/log/messages either.
>>>> It looks like it tries to create the file system in hdfs.c, but
>>>> I'm not sure where it fails.
>>>> I have HADOOP_HOME set, so I believe it gets the config info.
>>>>
>>>> Any idea?
>>>>
>>>> Thanks,
>>>> Roopa
>>>>
>>>> On Jan 28, 2009, at 1:59 PM, Craig Macdonald wrote:
>>>>
>>>>> In theory, yes.
>>>>> On inspection of libhdfs, which underlies fuse-dfs, I note that:
>>>>>
>>>>> * libhdfs takes a host and port number as input when connecting,
>>>>> but not a scheme (hdfs etc.). The easiest option would be to set
>>>>> S3 as your default file system in your hadoop-site.xml, then use
>>>>> the host "default". That should get libhdfs to use the S3 file
>>>>> system, i.e. set fuse-dfs to mount dfs://default:0/ and all
>>>>> should work as planned.
>>>>>
>>>>> * libhdfs also casts the FileSystem to a DistributedFileSystem
>>>>> for the df command. This would fail in your case. This issue is
>>>>> currently being worked on - see HADOOP-4368:
>>>>> https://issues.apache.org/jira/browse/HADOOP-4368
>>>>>
>>>>> C
>>>>>
>>>>> Roopa Sudheendra wrote:
>>>>>> Thanks for the response, Craig.
>>>>>> I looked at the fuse-dfs C code, and it looks like it rejects
>>>>>> anything other than "dfs://". So, given that Hadoop can connect
>>>>>> to the S3 file system, allowing the s3 scheme should solve my
>>>>>> problem?
>>>>>>
>>>>>> Roopa
>>>>>>
>>>>>> On Jan 28, 2009, at 1:03 PM, Craig Macdonald wrote:
>>>>>>
>>>>>>> Hi Roopa,
>>>>>>>
>>>>>>> I can't comment on the S3 specifics. However, fuse-dfs is based
>>>>>>> on a C interface called libhdfs, which allows C programs (such
>>>>>>> as fuse-dfs) to connect to the Hadoop file system Java API.
>>>>>>> That being the case, fuse-dfs should (theoretically) be able to
>>>>>>> connect to any file system that Hadoop can. Your mileage may
>>>>>>> vary, but if you find issues, please do report them through
>>>>>>> the normal channels.
>>>>>>>
>>>>>>> Craig
>>>>>>>
>>>>>>> Roopa Sudheendra wrote:
>>>>>>>> I am experimenting with Hadoop backed by the Amazon S3
>>>>>>>> filesystem as one of our backup storage solutions. So far,
>>>>>>>> just Hadoop and S3 (block-based, since it overcomes the 5 GB
>>>>>>>> limit) seem to be fine.
>>>>>>>> My problem is that I want to mount this filesystem using
>>>>>>>> fuse-dfs (since then I don't have to worry about how the file
>>>>>>>> is written on the system). Since the namenode does not get
>>>>>>>> started with an S3-backed Hadoop system, how can I connect
>>>>>>>> fuse-dfs to this setup?
>>>>>>>>
>>>>>>>> Appreciate your help.
>>>>>>>> Thanks,
>>>>>>>> Roopa
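
For reference, a sketch of the hadoop-site.xml that Craig's "default"
trick assumes. The bucket name and AWS credentials are placeholders;
the property names are those used by the block-based s3:// filesystem
in Hadoop of this vintage:

    <?xml version="1.0"?>
    <!-- Make the S3 block filesystem the default, so that libhdfs
         resolves the host "default" (dfs://default:0/) to it. -->
    <configuration>
      <property>
        <name>fs.default.name</name>
        <value>s3://your-backup-bucket</value>
      </property>
      <property>
        <name>fs.s3.awsAccessKeyId</name>
        <value>YOUR_AWS_ACCESS_KEY_ID</value>
      </property>
      <property>
        <name>fs.s3.awsSecretAccessKey</name>
        <value>YOUR_AWS_SECRET_ACCESS_KEY</value>
      </property>
    </configuration>

With that in place, the fuse_dfs command sketched earlier should hand
fuse-dfs the S3 filesystem, with the caveat that df will fail until
HADOOP-4368 lands.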