From: Roopa Sudheendra
To: core-user@hadoop.apache.org
Subject: Re: Hadoop+s3 & fuse-dfs
Date: Wed, 28 Jan 2009 17:45:38 -0600

Hey Craig,

I tried the way you suggested, but I get a "transport endpoint not connected" error. Can I see the logs anywhere? I don't see anything in /var/log/messages either. It looks like it tries to create the file system in hdfs.c, but I'm not sure where it fails. I have the Hadoop home set, so I believe it gets the config info. Any idea?

Thanks,
Roopa

On Jan 28, 2009, at 1:59 PM, Craig Macdonald wrote:

> In theory, yes.
> On inspection of libhdfs, which underlies fuse-dfs, I note that:
>
> * libhdfs takes a host and port number as input when connecting, but
> not a scheme (hdfs etc). The easiest option would be to set S3 as
> your default file system in your hadoop-site.xml, then use the
> host of "default". That should get libhdfs to use the S3 file
> system, i.e. set fuse-dfs to mount dfs://default:0/ and all should
> work as planned.
>
> * libhdfs also casts the FileSystem to a DistributedFileSystem for
> the df command. This would fail in your case. This issue is
> currently being worked on - see HADOOP-4368
> https://issues.apache.org/jira/browse/HADOOP-4368.
>
> C
>
> Roopa Sudheendra wrote:
>> Thanks for the response, Craig.
>> I looked at the fuse-dfs C code, and it looks like it does not accept
>> anything other than "dfs://". So, given that Hadoop can connect to
>> the S3 file system, allowing the s3 scheme should solve my problem?
>>
>> Roopa
>>
>> On Jan 28, 2009, at 1:03 PM, Craig Macdonald wrote:
>>
>>> Hi Roopa,
>>>
>>> I can't comment on the S3 specifics. However, fuse-dfs is based on
>>> a C interface called libhdfs, which allows C programs (such as
>>> fuse-dfs) to connect to the Hadoop file system Java API. This being
>>> the case, fuse-dfs should (theoretically) be able to connect to any
>>> file system that Hadoop can. Your mileage may vary, but if you
>>> find issues, please do report them through the normal channels.
>>>
>>> Craig
>>>
>>> Roopa Sudheendra wrote:
>>>> I am experimenting with Hadoop backed by the Amazon S3 file system
>>>> as one of our backup storage solutions. So far, Hadoop plus S3
>>>> (block-based, since it overcomes the 5 GB limit) seems fine. My
>>>> problem is that I want to mount this file system using fuse-dfs
>>>> (since I don't have to worry about how the file is written on the
>>>> system). Since the namenode does not get started with an S3-backed
>>>> Hadoop system, how can I connect fuse-dfs to this setup?
>>>>
>>>> Appreciate your help.
>>>> Thanks,
>>>> Roopa
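[For readers of the archive: Craig's suggestion above amounts to configuration along these lines. This is a sketch for the 0.19-era block-based s3 scheme, not a tested recipe; the bucket name, credentials, and mount point are placeholders, and the exact fuse_dfs invocation depends on how fuse-dfs was built.]

```xml
<!-- hadoop-site.xml: make the S3 block store the default file system -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>s3://your-bucket</value>
  </property>
  <property>
    <name>fs.s3.awsAccessKeyId</name>
    <value>YOUR_ACCESS_KEY</value>
  </property>
  <property>
    <name>fs.s3.awsSecretAccessKey</name>
    <value>YOUR_SECRET_KEY</value>
  </property>
</configuration>
```

```shell
# Mount with host "default" and port 0 so libhdfs falls back to
# whatever fs.default.name names -- here, the S3 file system.
./fuse_dfs_wrapper.sh dfs://default:0/ /mnt/s3-backup
```

With this in place, libhdfs never sees an "s3://" URI directly; it connects to the configured default file system, which is why no running namenode is required.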