hadoop-common-issues mailing list archives

From "Steve Loughran (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-10714) AmazonS3Client.deleteObjects() need to be limited to 1000 entries per call
Date Sat, 20 Sep 2014 17:08:34 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-10714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14142112#comment-14142112 ]

Steve Loughran commented on HADOOP-10714:
-----------------------------------------

Style-guide-wise, I have been [writing one during test runs|https://github.com/steveloughran/formality/blob/master/styleguide/styleguide.md].


h3. new tests in general

If there are extra basic things to test, e.g. {{TestS3AFileSystemBasicOps}}, there's no reason
why they can't be pulled up to the core.



h3. scale

{{testOpenCreate()}} relies on the tester having good upload bandwidth. It needs to be configurable
or optional. We already do some of this for seek() testing: {{ContractOptions}} has an option,
{{TEST_RANDOM_SEEK_COUNT}}, for test performance (that one exists because the cost of seeking
is so high on remote object stores over HTTPS). Ideally I'd like to see some abstract filesystem
scalability contract test which could then be implemented by S3A and others (file & hdfs
at the very least), so that it's a definition of what big files should do.
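
As a rough illustration of what I mean by an optional scale test, here's a minimal sketch. The class name and the option key are hypothetical (they just follow the {{TEST_RANDOM_SEEK_COUNT}} pattern), and it assumes the contract base class exposes its {{Configuration}} via {{getContract().getConf()}}:

{code}
import static org.junit.Assume.assumeTrue;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.contract.AbstractFSContractTestBase;
import org.junit.Test;

// Sketch only: the class name and option key are hypothetical.
public abstract class AbstractContractScaleTest extends AbstractFSContractTestBase {

  @Test
  public void testUploadBigFile() throws Throwable {
    Configuration conf = getContract().getConf();
    // skip, rather than fail, if the tester hasn't opted in to slow uploads
    assumeTrue(conf.getBoolean("scale.test.big.upload.enabled", false));
    // ...create, upload and verify a multi-GB file against getFileSystem()...
  }
}
{code}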


Looking at the OpenStack tests, {{SwiftScaleTestBase}} is a basis for scaled FS tests, with
one specific test, {{TestWriteManySmallFiles}}, looking at the cost of creating and deleting
lots of files. If we had something like that in the abstract tests, we'd have caught the problem
you've seen: scale problems with many thousands of small files.
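
A contract-level test in that vein might look something like the sketch below; the test name and file count are mine, and it leans on the {{path()}} and {{ContractTestUtils.touch()}} helpers:

{code}
@Test
public void testCreateAndDeleteManySmallFiles() throws Throwable {
  FileSystem fs = getFileSystem();
  Path dir = path("manysmallfiles");
  // deliberately more than the 1000-key multi-object delete limit
  int count = 1500;
  for (int i = 0; i < count; i++) {
    ContractTestUtils.touch(fs, new Path(dir, "file-" + i));
  }
  // a recursive delete (or a rename) of this directory is where an
  // object store's bulk-delete limit would surface
  assertTrue("recursive delete returned false", fs.delete(dir, true));
}
{code}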


h3. test helpers

You should look at {{ContractTestUtils}} for test helpers, replacing things like {{assertTrue(fs.exists(path("/tests3a/c/a/")));}}
with {{ContractTestUtils.assertPathExists()}}, which will produce a meaningful exception and
a listing of the parent dir on a failure. You could even put {{testReceivedData}} in there
too, once renamed to something like {{verifyReceivedData}}. There are already methods there
that can do exactly what you've rewritten, though they won't work for multi-GB files as they
save to byte arrays first. Your new verification code can be the start of some more scalability
tests.
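
To show the kind of substitution I mean (assuming the (filesystem, message, path) form of the helper):

{code}
// before: a bare assertion with no diagnostics on failure
assertTrue(fs.exists(path("/tests3a/c/a/")));

// after: on failure this reports the missing path and lists the parent directory
ContractTestUtils.assertPathExists(fs, "path created in the test", path("/tests3a/c/a/"));
{code}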

The same goes for the other methods, but fix {{createAndReadFileTest}} to always close its streams.
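
For example, something along these lines (a sketch of the pattern only, where {{fs}}, {{path}} and {{data}} come from the surrounding test):

{code}
FSDataOutputStream out = null;
try {
  out = fs.create(path, true);
  out.write(data);
} finally {
  // close even if create() or write() threw; IOUtils.closeStream() swallows close failures
  IOUtils.closeStream(out);
}
{code}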

h3. rename()

Finally, the actual source code change: the rename changes. The correct behaviour of {{FileSystem.rename()}}
is something I'm not confident I understand, and I'm not confident that others do either.
More precisely: POSIX rename is fundamentally misunderstood, and {{DFS.rename()}} diverges
from it anyway.

If the rename() problems weren't picked up in the previous tests, it means that not only does
S3A need the tests you've added, but {{AbstractContractRenameTest}} needs more tests too.
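
A starting point might be a case like the sketch below (the test name and file count are mine; the assertions are the existing {{ContractTestUtils}} ones):

{code}
@Test
public void testRenameDirectoryWithManyChildren() throws Throwable {
  FileSystem fs = getFileSystem();
  Path src = path("rename-src");
  Path dest = path("rename-dest");
  // past the bulk-delete limit that bit S3A
  int children = 1200;
  for (int i = 0; i < children; i++) {
    ContractTestUtils.touch(fs, new Path(src, "child-" + i));
  }
  assertTrue("rename returned false", fs.rename(src, dest));
  ContractTestUtils.assertPathExists(fs, "renamed directory", dest);
  ContractTestUtils.assertPathDoesNotExist(fs, "source directory after rename", src);
}
{code}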


h2. Summary

Looking at my comments, my key point is that you've done some really good scale tests here,
as well as tests to validate S3A behaviour. The scale tests can at some point be applied to
other filesystems, so maybe design them that way from the outset. Add, under the S3A /test
tree, a test case based on the abstract contract classes that holds the scale tests:

{code}
public class TestS3aContractScale extends AbstractFSContractTestBase {

    // scale tests, using/extending ContractTestUtils

}
{code}

That way there's no need for you to add new tests to all the other filesystems here and spend
the time getting them to work. For now we'll just trust HDFS to handle files >5GB and more
than 1000 entries in a directory.

The rename tests are a different matter. If there's something up with {{S3A.rename()}} and it
wasn't found, then the root tests are lacking. Please add them to {{AbstractContractRenameTest}}
and see what breaks. If HDFS breaks, then the new tests are wrong. If they work on HDFS but
fail on other things (swift://, s3n://), then that's something we need to know about.


> AmazonS3Client.deleteObjects() need to be limited to 1000 entries per call
> --------------------------------------------------------------------------
>
>                 Key: HADOOP-10714
>                 URL: https://issues.apache.org/jira/browse/HADOOP-10714
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/s3
>    Affects Versions: 2.5.0
>            Reporter: David S. Wang
>            Assignee: Juan Yu
>            Priority: Critical
>              Labels: s3
>         Attachments: HADOOP-10714-1.patch, HADOOP-10714.001.patch, HADOOP-10714.002.patch
>
>
> In the patch for HADOOP-10400, calls to AmazonS3Client.deleteObjects() need to have the number of entries at 1000 or below. Otherwise we get a Malformed XML error similar to:
> com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 400, AWS Service: Amazon S3, AWS Request ID: 6626AD56A3C76F5B, AWS Error Code: MalformedXML, AWS Error Message: The XML you provided was not well-formed or did not validate against our published schema, S3 Extended Request ID: DOt6C+Y84mGSoDuaQTCo33893VaoKGEVC3y1k2zFIQRm+AJkFH2mTyrDgnykSL+v
> at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:798)
> at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421)
> at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
> at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
> at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3480)
> at com.amazonaws.services.s3.AmazonS3Client.deleteObjects(AmazonS3Client.java:1739)
> at org.apache.hadoop.fs.s3a.S3AFileSystem.rename(S3AFileSystem.java:388)
> at org.apache.hadoop.hbase.snapshot.ExportSnapshot.run(ExportSnapshot.java:829)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.hbase.snapshot.ExportSnapshot.innerMain(ExportSnapshot.java:874)
> at org.apache.hadoop.hbase.snapshot.ExportSnapshot.main(ExportSnapshot.java:878)
> Note that this is mentioned in the AWS documentation:
> http://docs.aws.amazon.com/AmazonS3/latest/API/multiobjectdeleteapi.html
> "The Multi-Object Delete request contains a list of up to 1000 keys that you want to delete. In the XML, you provide the object key names, and optionally, version IDs if you want to delete a specific version of the object from a versioning-enabled bucket. For each key, Amazon S3…"
> Thanks to Matteo Bertozzi and Rahul Bhartia from AWS for identifying the problem.



