hadoop-common-user mailing list archives

From Steve Loughran <ste...@apache.org>
Subject Re: Namenode Exceptions with S3
Date Thu, 10 Jul 2008 11:14:35 GMT
Stuart Sierra wrote:
> I have Hadoop 0.17.1 and an AWS Secret Key that contains a slash ('/').
> 
> With distcp, I found that using the URL format s3://ID:SECRET@BUCKET/
> did not work, even if I encoded the slash as "%2F".  I got
> "org.jets3t.service.S3ServiceException: S3 HEAD request failed.
> ResponseCode=403, ResponseMessage=Forbidden"
> 
> When I put the AWS Secret Key in hadoop-site.xml and wrote the URL as
> s3://BUCKET/ it worked.
> 
> I have periods ('.') in my bucket name; that was not a problem.
> 
> What's weird is that org.apache.hadoop.fs.s3.Jets3tFileSystemStore
> uses java.net.URI, which should take care of decoding the %2F.
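
For reference, the workaround described above looks roughly like this in
hadoop-site.xml (a sketch with placeholder values; fs.s3.awsAccessKeyId
and fs.s3.awsSecretAccessKey are the property names the S3 FileSystem
reads its credentials from):

    <property>
      <name>fs.s3.awsAccessKeyId</name>
      <value>YOUR_ACCESS_KEY_ID</value>
    </property>
    <property>
      <name>fs.s3.awsSecretAccessKey</name>
      <value>YOUR_SECRET_KEY_WITH_SLASH</value>
    </property>

With the credentials in the configuration, the URL reduces to s3://BUCKET/
and no percent-encoding of the secret is needed.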


I've been using the Restlet API to work with S3, rather than JetS3t; it 
seems pretty good (it handles the funny AWS authentication). The big 
problem I've found is that AWS auth requires the caller's clock to be 
close to Amazon's, and on VMware-hosted images the clock can drift 
enough for that to start failing.
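
To make the clock dependence concrete, here is a minimal sketch of the S3 
REST request signing involved (AWS signature version 2), in plain Java with 
made-up credentials, bucket, and key, and using java.util.Base64 from a 
modern JDK rather than the 2008-era libraries. The signed string includes 
the request's Date header, which is why a skewed clock makes S3 reject 
requests (the allowed window is about fifteen minutes):

    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;
    import java.text.SimpleDateFormat;
    import java.util.Base64;
    import java.util.Date;
    import java.util.Locale;
    import java.util.TimeZone;

    public class S3AuthSketch {
      public static void main(String[] args) throws Exception {
        String accessKey = "ACCESS_KEY_ID";   // placeholder credentials
        String secretKey = "SECRET_KEY";

        // S3 signs the Date header; if it is more than about 15 minutes
        // off Amazon's clock the request is rejected, which is what a
        // drifting VM clock eventually triggers.
        SimpleDateFormat rfc822 =
            new SimpleDateFormat("EEE, dd MMM yyyy HH:mm:ss z", Locale.US);
        rfc822.setTimeZone(TimeZone.getTimeZone("GMT"));
        String date = rfc822.format(new Date());

        // String-to-sign for a bare GET of /mybucket/mykey (hypothetical
        // names; empty Content-MD5 and Content-Type, no x-amz- headers).
        String toSign = "GET\n\n\n" + date + "\n/mybucket/mykey";

        // HMAC-SHA1 over the string-to-sign, keyed with the secret key.
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(secretKey.getBytes("UTF-8"), "HmacSHA1"));
        String signature = Base64.getEncoder()
            .encodeToString(mac.doFinal(toSign.getBytes("UTF-8")));

        // These two headers go on the HTTP request.
        System.out.println("Date: " + date);
        System.out.println("Authorization: AWS " + accessKey + ":" + signature);
      }
    }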
