Message-ID: <3216506.1170264485825.JavaMail.jira@brutus>
Date: Wed, 31 Jan 2007 09:28:05 -0800 (PST)
From: "stack@archive.org (JIRA)"
To: hadoop-dev@lucene.apache.org
Subject: [jira] Commented: (HADOOP-882) S3FileSystem should retry if there is a communication problem with S3
In-Reply-To: <10832887.1168507047769.JavaMail.jira@brutus>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8

    [ https://issues.apache.org/jira/browse/HADOOP-882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12469117 ]

stack@archive.org commented on HADOOP-882:
------------------------------------------

I updated jets3t to the 0.5.0 release. I had to make the edits below. The API has probably changed in other ways, but I've not spent the time verifying. Unless someone else is working on a patch that includes the new version of the jets3t lib and the complementary changes to the S3 fs, I can give it a go (retries seem to be necessary if you're trying to upload anything more than a few kilobytes).

Relatedly, after adding the new lib and making the changes below, uploads ('puts') would fail with the following complaint:

07/01/31 00:47:13 WARN service.S3Service: Encountered 1 S3 Internal Server error(s), will retry in 50ms
put: Input stream is not repeatable as 1048576 bytes have been written, exceeding the available buffer size of 131072

I found the 131072 buffer in jets3t. It turns out the buffer size is configurable. Dropping a jets3t.properties file into the ${HADOOP_HOME}/conf directory (so a lookup on the CLASSPATH succeeds) with an amended s3service.stream-retry-buffer-size got me over the 'put: Input stream...' hump. I set it to the value of dfs.block.size so jets3t could replay a full block if it had to.

Then I noticed that the blocks written to S3 were 1MB in size. I'm uploading tens of GB, so that made for tens of thousands of blocks. No harm, I suppose, but I was a little stumped that the block size in S3 wasn't the value of dfs.block.size. I found the fs.s3.block.size property in the S3 fs code. Shouldn't this setting be bubbled up into hadoop-default with a default value of ${dfs.block.size}? (Setting this in my config made for 64MB S3 blocks.)

I can add the latter items to the wiki page on S3, or can include a jets3t.properties and fs.s3.block.size in the patch. What do others think?
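For reference, a minimal sketch of the two configuration tweaks described above. The value 67108864 (64MB) is an assumption matching the stock dfs.block.size default; adjust it to whatever your dfs.block.size actually is.

```properties
# conf/jets3t.properties -- lives in ${HADOOP_HOME}/conf so it is found
# on the CLASSPATH. Makes the jets3t retry buffer large enough to
# replay a full DFS block on an S3 internal-server-error retry.
s3service.stream-retry-buffer-size=67108864
```

```xml
<!-- hadoop-site.xml: make S3 blocks the same size as DFS blocks -->
<property>
  <name>fs.s3.block.size</name>
  <value>67108864</value>
</property>
```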
Index: src/java/org/apache/hadoop/fs/s3/Jets3tFileSystemStore.java
===================================================================
--- src/java/org/apache/hadoop/fs/s3/Jets3tFileSystemStore.java (revision 501895)
+++ src/java/org/apache/hadoop/fs/s3/Jets3tFileSystemStore.java (working copy)
@@ -133,7 +133,7 @@
       S3Object object = s3Service.getObject(bucket, key);
       return object.getDataInputStream();
     } catch (S3ServiceException e) {
-      if (e.getErrorCode().equals("NoSuchKey")) {
+      if (e.getS3ErrorCode().equals("NoSuchKey")) {
         return null;
       }
       if (e.getCause() instanceof IOException) {
@@ -149,7 +149,7 @@
                                            null, byteRangeStart, null);
       return object.getDataInputStream();
     } catch (S3ServiceException e) {
-      if (e.getErrorCode().equals("NoSuchKey")) {
+      if (e.getS3ErrorCode().equals("NoSuchKey")) {
         return null;
       }
       if (e.getCause() instanceof IOException) {

> S3FileSystem should retry if there is a communication problem with S3
> ---------------------------------------------------------------------
>
>                 Key: HADOOP-882
>                 URL: https://issues.apache.org/jira/browse/HADOOP-882
>             Project: Hadoop
>          Issue Type: Improvement
>          Components: fs
>    Affects Versions: 0.10.1
>            Reporter: Tom White
>         Assigned To: Tom White
>
> File system operations currently fail if there is a communication problem (IOException) with S3. All operations that communicate with S3 should retry a fixed number of times before failing.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
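As a sketch of the fixed-count retry behaviour the issue description asks for — not the actual patch; the helper name, retry count, and sleep interval are all assumptions, and it uses a Java lambda purely for brevity:

```java
import java.io.IOException;

public class RetrySketch {
    static final int MAX_RETRIES = 4;       // assumed fixed retry count
    static final long RETRY_SLEEP_MS = 50;  // assumed fixed backoff

    /** A single S3 operation that may fail with an IOException. */
    interface S3Call<T> {
        T run() throws IOException;
    }

    /** Run the call, retrying up to MAX_RETRIES times on IOException. */
    static <T> T withRetries(S3Call<T> call) throws IOException {
        IOException last = null;
        for (int attempt = 0; attempt <= MAX_RETRIES; attempt++) {
            try {
                return call.run();
            } catch (IOException e) {
                last = e;  // communication problem; sleep, then try again
                try {
                    Thread.sleep(RETRY_SLEEP_MS);
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    throw new IOException("interrupted during retry");
                }
            }
        }
        throw last;  // all attempts failed; surface the last error
    }

    public static void main(String[] args) throws IOException {
        // Simulated call: fails twice, then succeeds on the third attempt.
        final int[] calls = {0};
        String result = withRetries(() -> {
            if (++calls[0] < 3) throw new IOException("simulated S3 glitch");
            return "ok";
        });
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

A real version would likely retry only on errors known to be transient (like the S3 internal server errors jets3t itself reports) rather than on every IOException.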