hadoop-hdfs-user mailing list archives

From Alexander Hristov <al...@planetalia.com>
Subject Hadoop 0.23.3 and Amazon S3
Date Sat, 29 Sep 2012 15:37:01 GMT
Hi again,

I'm having problems trying to make Hadoop use S3 or S3N as its filesystem.

This is what I have in core-site.xml:


       <value> something </value>

       <value> something </value>

       <value> something </value>

       <value> something </value>


The secret key does not contain any slashes.
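For reference, this is the shape those credential properties normally take in this Hadoop generation; the values here are placeholders, and the documented property names are fs.s3n.awsAccessKeyId / fs.s3n.awsSecretAccessKey for s3n://, with the fs.s3.* pair read by the block-based s3:// filesystem:

```xml
<!-- sketch of the usual S3 credential properties; values are placeholders -->
<property>
  <name>fs.s3n.awsAccessKeyId</name>
  <value>YOUR_ACCESS_KEY</value>
</property>
<property>
  <name>fs.s3n.awsSecretAccessKey</name>
  <value>YOUR_SECRET_KEY</value>
</property>
<!-- the s3:// (block) filesystem reads fs.s3.awsAccessKeyId
     and fs.s3.awsSecretAccessKey instead -->
```

Note that the two schemes read different property pairs, so a bucket that works with one can still fail with 403 under the other if only one pair is set.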

When I use s3n://bucketname, I get this:

[hadoop@ahristov hadoop]$ hadoop fs -put LICENSE.txt /
put: org.jets3t.service.S3ServiceException: S3 HEAD request failed for 
'/LICENSE.txt' - ResponseCode=403, ResponseMessage=Forbidden

And when I use s3://bucketname, I get this:

[hadoop@ahristov hadoop]$ hadoop fs -put LICENSE.txt /
put: `/': No such file or directory

I couldn't find any logs generated anywhere.

On the other hand, if I use a quick and dirty Java snippet to achieve 
the same, like:

         Configuration conf = new Configuration();
         FileSystem fileSystem = FileSystem.get(conf);
         InputStream in = TestS3.class.getResourceAsStream("/res/test.txt");
         FSDataOutputStream out = fileSystem.create(new Path("/book.txt"));
         byte[] buffer = new byte[10240];
         while (true) {
             int read = in.read(buffer);
             if (read == -1) break;
             out.write(buffer, 0, read);
         }
         in.close();
         out.close();
it works with both s3:// and s3n://.
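One way to narrow the gap between the snippet and the shell is to request the filesystem for an explicit URI instead of relying on fs.default.name; this is only a sketch, and "bucketname" is a placeholder for the real bucket:

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class TestS3Explicit {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Ask for the s3n filesystem explicitly, bypassing fs.default.name;
        // "bucketname" is a placeholder for the real bucket.
        FileSystem fs = FileSystem.get(URI.create("s3n://bucketname/"), conf);
        // A simple metadata call: a 403 here points at credentials,
        // while a misconfigured default filesystem never gets this far.
        System.out.println(fs.exists(new Path("/LICENSE.txt")));
    }
}
```

If this succeeds while `hadoop fs` fails, the shell is likely picking up a different core-site.xml than the one shown above.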


