From: Alok Kumar <alokawi@gmail.com>
To: hdfs-user@hadoop.apache.org
Date: Mon, 23 Jul 2012 13:56:21 +0530
Subject: problem configuring hadoop with s3 bucket

Hello Group,

I have a local Hadoop setup running. Now I want to use an Amazon S3 bucket (s3://<mybucket>) as my data store, so I set dfs.data.dir=s3://<mybucket>/hadoop/ in my hdfs-site.xml. Is that the correct way?

I'm getting this error:

WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Invalid directory in dfs.data.dir: can not create directory: s3://<mybucket>/hadoop
2012-07-23 13:15:06,260 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: All directories in dfs.data.dir are invalid.
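For reference, the exact snippet I added to hdfs-site.xml is below (<mybucket> stands in for my real bucket name):

<!-- hdfs-site.xml: the setting I tried; <mybucket> is a placeholder -->
<property>
  <name>dfs.data.dir</name>
  <value>s3://<mybucket>/hadoop/</value>
</property>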
When I changed it to dfs.data.dir=s3://<mybucket>/ instead, I got:

ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.lang.IllegalArgumentException: Wrong FS: s3://<mybucket>/, expected: file:///
    at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:381)
    at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:55)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:393)
    at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:251)
    at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:146)
    at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:162)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1574)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1521)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1539)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1665)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1682)

Also, when I change fs.default.name=s3://<mybucket>, the NameNode does not come up; it fails with: ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.net.BindException (in any case I want to run the NameNode locally, so I reverted it back to hdfs://localhost:9000).
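In case it clarifies what I was attempting: my understanding from the Hadoop S3 wiki is that using S3 as the default filesystem is configured in core-site.xml roughly as below, not through dfs.data.dir. The fs.s3.awsAccessKeyId and fs.s3.awsSecretAccessKey property names are my reading of the wiki rather than something I have verified, and ID/SECRET are placeholders:

<!-- core-site.xml sketch: S3 block filesystem as the default FS.
     <mybucket>, ID and SECRET are placeholders; the two fs.s3.* credential
     property names are taken from the Hadoop S3 wiki (unverified by me). -->
<property>
  <name>fs.default.name</name>
  <value>s3://<mybucket></value>
</property>
<property>
  <name>fs.s3.awsAccessKeyId</name>
  <value>ID</value>
</property>
<property>
  <name>fs.s3.awsSecretAccessKey</name>
  <value>SECRET</value>
</property>

Is that the right direction, or can a DataNode really be pointed at S3 directly?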

Your help is highly appreciated!

Thanks,
--
Alok Kumar