From: mohamedhafez <mohamedhafez@google.com>
To: core-user@hadoop.apache.org
Date: Thu, 17 Apr 2008 11:19:22 -0700 (PDT)
Subject: Re: Not able to back up to S3

If I try to specify the ID and Secret as part of the S3 URL, I get the
following error:

root@ip-10-251-110-134:~# hadoop distcp /dijkstra.log s3://1W27ZBE2AKDVVFZB9T02:FEQbLfFVh+kF7VdTnw%2fPSqed8Joez+ummWtmmuq5@new_bucket_mohamedhafez/
With failures, global counters are inaccurate; consider running with -i
Copy failed: java.lang.IllegalArgumentException: AWS Access Key ID and Secret Access Key must be specified as the username or password (respectively) of a s3 URL, or by setting the fs.s3.awsAccessKeyId or fs.s3.awsSecretAccessKey properties (respectively).
        at org.apache.hadoop.fs.s3.Jets3tFileSystemStore.initialize(Jets3tFileSystemStore.java:101)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
        at $Proxy1.initialize(Unknown Source)
        at org.apache.hadoop.fs.s3.S3FileSystem.initialize(S3FileSystem.java:78)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:166)
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:175)
        at org.apache.hadoop.util.CopyFiles.setup(CopyFiles.java:672)
        at org.apache.hadoop.util.CopyFiles.copy(CopyFiles.java:475)
        at org.apache.hadoop.util.CopyFiles.run(CopyFiles.java:550)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
        at org.apache.hadoop.util.CopyFiles.main(CopyFiles.java:563)
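For reference, the config-file route I try below looks roughly like this
in my hadoop-site.xml (the property names are the ones named in the error
message above; the values here are placeholders, not my real keys):

    <property>
      <name>fs.s3.awsAccessKeyId</name>
      <value>YOUR_ACCESS_KEY_ID</value>
    </property>
    <property>
      <name>fs.s3.awsSecretAccessKey</name>
      <value>YOUR_SECRET_ACCESS_KEY</value>
    </property>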
When I put the id and secret in the config file, I get the following error:

root@ip-10-251-110-134:~# hadoop distcp /dijkstra.log s3://new_bucket_mohamedhafez/
08/04/17 18:17:12 WARN httpclient.RestS3Service: Unable to access bucket: null
org.jets3t.service.S3ServiceException: Cannot connect to S3 Service with a null path
        at org.jets3t.service.impl.rest.httpclient.RestS3Service.setupConnection(RestS3Service.java:616)
        at org.jets3t.service.impl.rest.httpclient.RestS3Service.performRestHead(RestS3Service.java:483)
        at org.jets3t.service.impl.rest.httpclient.RestS3Service.isBucketAccessible(RestS3Service.java:714)
        at org.jets3t.service.S3Service.createBucket(S3Service.java:499)
        at org.apache.hadoop.fs.s3.Jets3tFileSystemStore.createBucket(Jets3tFileSystemStore.java:136)
        at org.apache.hadoop.fs.s3.Jets3tFileSystemStore.initialize(Jets3tFileSystemStore.java:129)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
        at $Proxy1.initialize(Unknown Source)
        at org.apache.hadoop.fs.s3.S3FileSystem.initialize(S3FileSystem.java:78)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:166)
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:175)
        at org.apache.hadoop.util.CopyFiles.setup(CopyFiles.java:672)
        at org.apache.hadoop.util.CopyFiles.copy(CopyFiles.java:475)
        at org.apache.hadoop.util.CopyFiles.run(CopyFiles.java:550)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
        at org.apache.hadoop.util.CopyFiles.main(CopyFiles.java:563)
With failures, global counters are inaccurate; consider running with -i
Copy failed: org.apache.hadoop.fs.s3.S3Exception: org.jets3t.service.S3ServiceException: The action Create Bucket cannot be performed with an invalid bucket: S3Bucket [name=null,creationDate=null,owner=null] Metadata={}
        at org.apache.hadoop.fs.s3.Jets3tFileSystemStore.createBucket(Jets3tFileSystemStore.java:141)
        at org.apache.hadoop.fs.s3.Jets3tFileSystemStore.initialize(Jets3tFileSystemStore.java:129)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
        at $Proxy1.initialize(Unknown Source)
        at org.apache.hadoop.fs.s3.S3FileSystem.initialize(S3FileSystem.java:78)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:166)
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:175)
        at org.apache.hadoop.util.CopyFiles.setup(CopyFiles.java:672)
        at org.apache.hadoop.util.CopyFiles.copy(CopyFiles.java:475)
        at org.apache.hadoop.util.CopyFiles.run(CopyFiles.java:550)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
        at org.apache.hadoop.util.CopyFiles.main(CopyFiles.java:563)
Caused by: org.jets3t.service.S3ServiceException: The action Create Bucket cannot be performed with an invalid bucket: S3Bucket [name=null,creationDate=null,owner=null] Metadata={}
        at org.jets3t.service.S3Service.assertValidBucket(S3Service.java:420)
        at org.jets3t.service.S3Service.createBucket(S3Service.java:653)
        at org.jets3t.service.S3Service.createBucket(S3Service.java:506)
        at org.apache.hadoop.fs.s3.Jets3tFileSystemStore.createBucket(Jets3tFileSystemStore.java:136)
        ... 17 more

I get the same error whether I replace the / with %2f or not. s3sync from
the local filesystem works just fine.
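One thing I'm starting to suspect (just a guess on my part, I haven't
confirmed it against the Hadoop source): my bucket name contains
underscores, and java.net.URI won't parse an authority with underscores
as a hostname, so both the host and any embedded user info come back
null. That would line up with both errors above. A quick check (the
class name here is mine, purely for illustration):

    import java.net.URI;

    public class BucketHostCheck {
        public static void main(String[] args) throws Exception {
            // Underscores are not valid in hostnames, so java.net.URI falls
            // back to a registry-based authority: getHost() returns null
            // (and getUserInfo() would be null too for the credentialed URL).
            URI underscored = new URI("s3://new_bucket_mohamedhafez/");
            System.out.println(underscored.getHost());      // null
            System.out.println(underscored.getAuthority()); // new_bucket_mohamedhafez

            // The same name without underscores parses as a normal host.
            URI plain = new URI("s3://newbucketmohamedhafez/");
            System.out.println(plain.getHost());            // newbucketmohamedhafez
        }
    }

If that is what's going on, retrying with a bucket whose name has no
underscores should behave differently.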
Thanks,
Mohamed


Tom White wrote:
>
> The bucket doesn't need formatting, and Hadoop creates buckets
> automatically if they don't already exist. Can you post the error
> message you are getting, please?
>
> Tom
>
> On 17/04/2008, mohamedhafez wrote:
>>
>> Hi, I am trying to back up data to S3 from HDFS using distcp, but it
>> fails complaining of a null bucket. The bucket does exist, and I can
>> access it with s3sync from the local filesystem. Can anyone help me
>> with this? Does the bucket need to be formatted in some way first? Is
>> there some command in Hadoop to create a bucket it can use?
>
> --
> Blog: http://www.lexemetech.com/