hadoop-common-dev mailing list archives
From "stack@archive.org (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-882) S3FileSystem should retry if there is a communication problem with S3
Date Wed, 31 Jan 2007 17:28:05 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12469117 ]

stack@archive.org commented on HADOOP-882:
------------------------------------------

I updated jets3t to the 0.5.0 release.  I had to make the edits below.  The API has probably
changed in other ways, but I haven't spent the time verifying.  Unless someone else is working
on a patch that includes the new version of the jets3t lib and complementary changes to the
s3 fs, I can give it a go (it seems retries are necessary if you're uploading anything more
than a few kilobytes).
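
If no one else is on it, I was thinking of something along these lines inside Jets3tFileSystemStore
(sketch only; MAX_TRIES and the method name are just for illustration, the actual patch would
likely look different):

  // Sketch only: retry an S3 get a fixed number of times before giving up.
  // Uses the same jets3t calls already in Jets3tFileSystemStore; MAX_TRIES
  // and the method name are hypothetical.
  private InputStream getWithRetry(String key) throws IOException {
    final int MAX_TRIES = 3;  // hypothetical fixed retry count
    for (int attempt = 1; ; attempt++) {
      try {
        S3Object object = s3Service.getObject(bucket, key);
        return object.getDataInputStream();
      } catch (S3ServiceException e) {
        if (attempt >= MAX_TRIES) {
          throw new IOException("S3 get of " + key + " failed after "
              + MAX_TRIES + " attempts: " + e.getMessage());
        }
        // otherwise fall through and try again
      }
    }
  }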

Relatedly, after adding the new lib and making the changes below, uploads ('puts') would fail
with the following complaint.

07/01/31 00:47:13 WARN service.S3Service: Encountered 1 S3 Internal Server error(s), will
retry in 50ms
put: Input stream is not repeatable as 1048576 bytes have been written, exceeding the available
buffer size of 131072

I found the 131072 buffer in jets3t.  It turns out the buffer size is configurable.  Dropping
a jets3t.properties file into the ${HADOOP_HOME}/conf directory (so a lookup on the CLASSPATH
succeeds) with an amended s3service.stream-retry-buffer-size got me over the 'put: Input stream...'
hump.  I set it to the value of dfs.block.size so it can replay a full block if it has to.
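
For reference, the conf/jets3t.properties I dropped in has just the one line in it, something
like the following (67108864 here assumes the stock dfs.block.size of 64MB; use whatever your
config actually says):

  # ${HADOOP_HOME}/conf/jets3t.properties
  # Buffer must be big enough to replay a full block on retry.
  s3service.stream-retry-buffer-size=67108864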

Then I noticed that the blocks written to S3 were 1MB in size.  I'm uploading tens of GBs,
so that made for tens of thousands of blocks.  No harm I suppose, but I was a little stumped
that the block size in S3 wasn't the value of dfs.block.size.  I found the fs.s3.block.size
property in the S3 fs code.  Shouldn't this setting be bubbled up into hadoop-default with
a default value of ${dfs.block.size}?  (Setting this in my config made for 64MB S3 blocks.)
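
Concretely, the entry I have in mind would look roughly like this in hadoop-default.xml (or
a site config); the description text is just a suggestion:

  <property>
    <name>fs.s3.block.size</name>
    <value>67108864</value>
    <description>Block size to use when writing files to S3
    (should normally match dfs.block.size).</description>
  </property>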

I can add the latter items to the S3 wiki page, or include a jets3t.properties and fs.s3.block.size
in the patch.  What do others think?

Index: src/java/org/apache/hadoop/fs/s3/Jets3tFileSystemStore.java
===================================================================
--- src/java/org/apache/hadoop/fs/s3/Jets3tFileSystemStore.java (revision 501895)
+++ src/java/org/apache/hadoop/fs/s3/Jets3tFileSystemStore.java (working copy)
@@ -133,7 +133,7 @@
       S3Object object = s3Service.getObject(bucket, key);
       return object.getDataInputStream();
     } catch (S3ServiceException e) {
-      if (e.getErrorCode().equals("NoSuchKey")) {
+      if (e.getS3ErrorCode().equals("NoSuchKey")) {
         return null;
       }
       if (e.getCause() instanceof IOException) {
@@ -149,7 +149,7 @@
           null, byteRangeStart, null);
       return object.getDataInputStream();
     } catch (S3ServiceException e) {
-      if (e.getErrorCode().equals("NoSuchKey")) {
+      if (e.getS3ErrorCode().equals("NoSuchKey")) {
         return null;
       }
       if (e.getCause() instanceof IOException) {


> S3FileSystem should retry if there is a communication problem with S3
> ---------------------------------------------------------------------
>
>                 Key: HADOOP-882
>                 URL: https://issues.apache.org/jira/browse/HADOOP-882
>             Project: Hadoop
>          Issue Type: Improvement
>          Components: fs
>    Affects Versions: 0.10.1
>            Reporter: Tom White
>         Assigned To: Tom White
>
> File system operations currently fail if there is a communication problem (IOException)
> with S3. All operations that communicate with S3 should retry a fixed number of times before
> failing.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

