Subject: Re: Using S3 instead of HDFS
From: Mark Kerzner
To: common-user@hadoop.apache.org
Date: Wed, 18 Jan 2012 00:44:46 -0600

Well, here is my error message:

  Starting Hadoop namenode daemon: starting namenode, logging to
  /usr/lib/hadoop-0.20/logs/hadoop-hadoop-namenode-ip-10-126-11-26.out
  ERROR. Could not start Hadoop namenode daemon
  Starting Hadoop secondarynamenode daemon: starting secondarynamenode, logging to
  /usr/lib/hadoop-0.20/logs/hadoop-hadoop-secondarynamenode-ip-10-126-11-26.out
  Exception in thread "main" java.lang.IllegalArgumentException: Invalid URI
  for NameNode address (check fs.default.name): s3n://myname.testdata is not
  of scheme 'hdfs'.
          at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:224)
          at org.apache.hadoop.hdfs.server.namenode.NameNode.getServiceAddress(NameNode.java:209)
          at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:182)
          at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:150)
          at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:624)
  ERROR. Could not start Hadoop secondarynamenode daemon

And if I don't need to start the NameNode, then where do I give the S3
credentials?
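For what it's worth, here is my reading of that wiki page: the credentials
go into core-site.xml next to fs.default.name. A minimal sketch of what I
have been trying (the bucket name "mybucket" and the key values are
placeholders):

  <?xml version="1.0"?>
  <configuration>
    <!-- Use the S3 native filesystem as the default instead of HDFS.
         "mybucket" is a placeholder bucket name. -->
    <property>
      <name>fs.default.name</name>
      <value>s3n://mybucket</value>
    </property>
    <!-- Credentials for the s3n:// scheme; the s3:// block store uses
         the corresponding fs.s3.* property names instead. -->
    <property>
      <name>fs.s3n.awsAccessKeyId</name>
      <value>YOUR_ACCESS_KEY_ID</value>
    </property>
    <property>
      <name>fs.s3n.awsSecretAccessKey</name>
      <value>YOUR_SECRET_ACCESS_KEY</value>
    </property>
  </configuration>

The wiki also shows embedding the credentials in the URI itself
(s3n://ID:SECRET@mybucket). If this is right, then with S3 as the default
filesystem the HDFS daemons should not be started at all, and something
like "hadoop fs -ls s3n://mybucket/" should confirm the credentials are
being picked up.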
Thank you,
Mark

On Wed, Jan 18, 2012 at 12:36 AM, Harsh J wrote:

> Hey Mark,
>
> What is the exact trouble you run into? What do the error messages
> indicate?
>
> This should be definitive enough, I think:
> http://wiki.apache.org/hadoop/AmazonS3
>
> On Wed, Jan 18, 2012 at 11:55 AM, Mark Kerzner wrote:
> > Hi,
> >
> > Whatever I do, I can't make it work; that is, I cannot use
> >
> >   s3://host
> >
> > or
> >
> >   s3n://host
> >
> > as a replacement for HDFS while running an EC2 cluster. I change the
> > settings in core-site.xml and in hdfs-site.xml, start the Hadoop
> > services, and it fails with error messages.
> >
> > Is there a place where this is clearly described?
> >
> > Thank you so much.
> >
> > Mark
>
> --
> Harsh J
> Customer Ops. Engineer, Cloudera