Date: Mon, 29 Sep 2014 17:21:35 +0000 (UTC)
From: "Charles Lamb (JIRA)"
To: common-issues@hadoop.apache.org
Subject: [jira] [Commented] (HADOOP-10714) AmazonS3Client.deleteObjects() need to be limited to 1000 entries per call

    [ https://issues.apache.org/jira/browse/HADOOP-10714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14151919#comment-14151919 ]

Charles Lamb commented on HADOOP-10714:
---------------------------------------

[~jyu@cloudera.com],

The tests all worked like a champ:

{noformat}
-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running org.apache.hadoop.fs.contract.s3n.TestS3NContractRootDir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.68 sec - in org.apache.hadoop.fs.contract.s3n.TestS3NContractRootDir
Running org.apache.hadoop.fs.contract.s3n.TestS3NContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 57.28 sec - in org.apache.hadoop.fs.contract.s3n.TestS3NContractRename
Running org.apache.hadoop.fs.contract.s3n.TestS3NContractMkdir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 44.416 sec - in org.apache.hadoop.fs.contract.s3n.TestS3NContractMkdir
Running org.apache.hadoop.fs.contract.s3n.TestS3NContractSeek
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 74.451 sec - in org.apache.hadoop.fs.contract.s3n.TestS3NContractSeek
Running org.apache.hadoop.fs.contract.s3n.TestS3NContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 32.289 sec - in org.apache.hadoop.fs.contract.s3n.TestS3NContractOpen
Running org.apache.hadoop.fs.contract.s3n.TestS3NContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 3, Time elapsed: 35.512 sec - in org.apache.hadoop.fs.contract.s3n.TestS3NContractCreate
Running org.apache.hadoop.fs.contract.s3n.TestS3NContractDelete
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 72.683 sec - in org.apache.hadoop.fs.contract.s3n.TestS3NContractDelete

Results :

Tests run: 46, Failures: 0, Errors: 0, Skipped: 3

Running org.apache.hadoop.fs.contract.s3a.TestS3AContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 86.871 sec - in org.apache.hadoop.fs.contract.s3a.TestS3AContractRename
Running org.apache.hadoop.fs.contract.s3a.TestS3AContractMkdir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 55.332 sec - in org.apache.hadoop.fs.contract.s3a.TestS3AContractMkdir
Running org.apache.hadoop.fs.contract.s3a.TestS3AContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 3, Time elapsed: 47.507 sec - in org.apache.hadoop.fs.contract.s3a.TestS3AContractCreate
Running org.apache.hadoop.fs.contract.s3a.TestS3AContractDelete
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 84.011 sec - in org.apache.hadoop.fs.contract.s3a.TestS3AContractDelete
Running org.apache.hadoop.fs.contract.s3a.TestS3AContractSeek
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 99.172 sec - in org.apache.hadoop.fs.contract.s3a.TestS3AContractSeek
Running org.apache.hadoop.fs.contract.s3a.TestS3AContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 44.234 sec - in org.apache.hadoop.fs.contract.s3a.TestS3AContractOpen
Running org.apache.hadoop.fs.contract.s3a.TestS3AContractRootDir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.172 sec - in org.apache.hadoop.fs.contract.s3a.TestS3AContractRootDir

Results :

Tests run: 46, Failures: 0, Errors: 0, Skipped: 3
{noformat}
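For context, the fix being exercised by these tests chunks the key list so that no single AmazonS3Client.deleteObjects() call exceeds S3's 1000-key limit. Below is a minimal sketch of that batching pattern against the AWS SDK for Java v1; the BatchedDelete helper is illustrative only, not the actual patch code:

{code:java}
import java.util.List;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.DeleteObjectsRequest;
import com.amazonaws.services.s3.model.DeleteObjectsRequest.KeyVersion;

public final class BatchedDelete {
  // S3's Multi-Object Delete API accepts at most 1000 keys per request.
  private static final int MAX_ENTRIES = 1000;

  /**
   * Deletes all of the given keys, issuing one Multi-Object Delete
   * request per batch of at most MAX_ENTRIES keys.
   */
  public static void deleteAll(AmazonS3 s3, String bucket, List<KeyVersion> keys) {
    for (int i = 0; i < keys.size(); i += MAX_ENTRIES) {
      // subList() is a view, so no keys are copied per batch.
      List<KeyVersion> batch = keys.subList(i, Math.min(i + MAX_ENTRIES, keys.size()));
      s3.deleteObjects(new DeleteObjectsRequest(bucket).withKeys(batch));
    }
  }
}
{code}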
> AmazonS3Client.deleteObjects() need to be limited to 1000 entries per call
> ---------------------------------------------------------------------------
>
>                 Key: HADOOP-10714
>                 URL: https://issues.apache.org/jira/browse/HADOOP-10714
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/s3
>    Affects Versions: 2.5.0
>            Reporter: David S. Wang
>            Assignee: Juan Yu
>            Priority: Critical
>              Labels: s3
>         Attachments: HADOOP-10714-007.patch, HADOOP-10714-1.patch, HADOOP-10714.001.patch, HADOOP-10714.002.patch, HADOOP-10714.003.patch, HADOOP-10714.004.patch, HADOOP-10714.005.patch, HADOOP-10714.006.patch
>
>
> In the patch for HADOOP-10400, calls to AmazonS3Client.deleteObjects() need to have the number of entries at 1000 or below. Otherwise we get a MalformedXML error similar to:
> com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 400, AWS Service: Amazon S3, AWS Request ID: 6626AD56A3C76F5B, AWS Error Code: MalformedXML, AWS Error Message: The XML you provided was not well-formed or did not validate against our published schema, S3 Extended Request ID: DOt6C+Y84mGSoDuaQTCo33893VaoKGEVC3y1k2zFIQRm+AJkFH2mTyrDgnykSL+v
> at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:798)
> at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421)
> at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
> at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
> at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3480)
> at com.amazonaws.services.s3.AmazonS3Client.deleteObjects(AmazonS3Client.java:1739)
> at org.apache.hadoop.fs.s3a.S3AFileSystem.rename(S3AFileSystem.java:388)
> at org.apache.hadoop.hbase.snapshot.ExportSnapshot.run(ExportSnapshot.java:829)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.hbase.snapshot.ExportSnapshot.innerMain(ExportSnapshot.java:874)
> at org.apache.hadoop.hbase.snapshot.ExportSnapshot.main(ExportSnapshot.java:878)
> Note that this is mentioned in the AWS documentation:
> http://docs.aws.amazon.com/AmazonS3/latest/API/multiobjectdeleteapi.html
> "The Multi-Object Delete request contains a list of up to 1000 keys that you want to delete. In the XML, you provide the object key names, and optionally, version IDs if you want to delete a specific version of the object from a versioning-enabled bucket. For each key, Amazon S3…."
> Thanks to Matteo Bertozzi and Rahul Bhartia from AWS for identifying the problem.
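For reference, the over-limit failure in the trace above is distinguishable from other 400 responses by its AWS error code. A small sketch using the SDK v1 exception getters; the helper name here is hypothetical:

{code:java}
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.AmazonS3Exception;
import com.amazonaws.services.s3.model.DeleteObjectsRequest;

public final class DeleteErrorCheck {
  /**
   * Issues the delete and reports whether it failed with the
   * MalformedXML error that an over-sized key list produces.
   */
  public static boolean failedOnKeyLimit(AmazonS3 s3, DeleteObjectsRequest request) {
    try {
      s3.deleteObjects(request);
      return false;
    } catch (AmazonS3Exception e) {
      // Matches the trace above: HTTP 400 with AWS error code MalformedXML.
      return e.getStatusCode() == 400 && "MalformedXML".equals(e.getErrorCode());
    }
  }
}
{code}

Note that MalformedXML can also indicate a genuinely malformed request body, so splitting the key list by count up front, as the patch does, is the reliable fix rather than retrying on this error.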