hadoop-user mailing list archives

From Alexander Hristov <al...@planetalia.com>
Subject Hadoop 0.23.3 and Amazon S3
Date Sat, 29 Sep 2012 15:37:01 GMT
Hi Again

I'm having problems getting Hadoop to use S3 or S3N as its filesystem.

This is what I have in core-site.xml:

<configuration>
     <property>
       <name>fs.default.name</name>
       <value>s3n://bucketname</value>
     </property>

     <property>
       <name>fs.s3.awsAccessKeyId</name>
       <value> something </value>
     </property>

     <property>
       <name>fs.s3.awsSecretAccessKey</name>
       <value> something </value>
     </property>

     <property>
       <name>fs.s3n.awsAccessKeyId</name>
       <value> something </value>
     </property>

     <property>
       <name>fs.s3n.awsSecretAccessKey</name>
       <value> something </value>
     </property>

      <property>
         <name>hadoop.tmp.dir</name>
         <value>/tmp/hadoop</value>
      </property>
</configuration>

The secret key does not contain any slashes.
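(I mention the slashes because, as far as I know, older Hadoop/jets3t versions also accept the credentials embedded in the URI itself, and a slash in the secret is known to break that form unless it is URL-escaped. ACCESS and SECRET below are placeholders:

     s3n://ACCESS:SECRET@bucketname/path

I'm not using that form here, only the fs.s3/fs.s3n properties above.)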

When I use s3n://bucketname, I get this:

[hadoop@ahristov hadoop]$ hadoop fs -put LICENSE.txt /
put: org.jets3t.service.S3ServiceException: S3 HEAD request failed for 
'/LICENSE.txt' - ResponseCode=403, ResponseMessage=Forbidden

And when I use s3://bucketname, I get this:

[hadoop@ahristov hadoop]$ hadoop fs -put LICENSE.txt /
put: `/': No such file or directory

I couldn't find any logs generated anywhere.

On the other hand, if I use a quick-and-dirty Java snippet to do the 
same thing, like:

         Configuration conf = new Configuration();
         conf.addResource(TestS3.class.getResourceAsStream("/res/core-s3.xml"));
         FileSystem fileSystem = FileSystem.get(conf);
         InputStream in = TestS3.class.getResourceAsStream("/res/test.txt");
         FSDataOutputStream out = fileSystem.create(new Path("/book.txt"));
         byte[] buffer = new byte[10240];
         while (true) {
             int read = in.read(buffer);
             if (read == -1) break;
             out.write(buffer, 0, read);
         }
         out.close();
         in.close();

it works with both s3:// and s3n://.
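The copy loop itself is just a generic stream copy and behaves the same against any InputStream/OutputStream pair; here is the same loop factored into a standalone sketch using plain java.io, with no Hadoop types, so it can be verified in isolation (the class name CopyStream is mine):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class CopyStream {
    // Copy everything from in to out with a fixed-size buffer,
    // returning the number of bytes copied. Same loop as in the snippet.
    static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buffer = new byte[10240];
        long total = 0;
        while (true) {
            int read = in.read(buffer);
            if (read == -1) break;
            out.write(buffer, 0, read);
            total += read;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        byte[] data = "hello s3".getBytes();
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        long n = copy(new ByteArrayInputStream(data), sink);
        System.out.println(n); // prints 8
    }
}
```

So the loop is fine; whatever goes wrong with `hadoop fs -put` must happen before any bytes are moved.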

Regards

Alexander
