From "Mingliang Liu (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-13449) S3Guard: Implement DynamoDBMetadataStore.
Date Thu, 01 Dec 2016 05:27:58 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15710947#comment-15710947 ]

Mingliang Liu commented on HADOOP-13449:
----------------------------------------

Sorry for the late reply. Thank you [~fabbri] very much for running the integration tests and
analyzing the failures. I can reproduce the unit test failure {{TestS3AGetFileStatus#testNotFound}}.
I can also reproduce the integration test failures in the US-standard region; I'll work on them
tomorrow. Thanks for taking care of {{ITestS3AFileSystemContract}}.
{code}
-------------------------------------------------------
 T E S T S
-------------------------------------------------------

Running org.apache.hadoop.fs.contract.s3a.ITestS3AContractGetFileStatus
Running org.apache.hadoop.fs.contract.s3a.ITestS3AContractMkdir
Running org.apache.hadoop.fs.contract.s3a.ITestS3AContractSeek
Running org.apache.hadoop.fs.contract.s3a.ITestS3AContractRename
Running org.apache.hadoop.fs.contract.s3a.ITestS3AContractDelete
Running org.apache.hadoop.fs.contract.s3a.ITestS3AContractOpen
Running org.apache.hadoop.fs.contract.s3a.ITestS3AContractCreate
Running org.apache.hadoop.fs.contract.s3a.ITestS3AContractDistCp
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 63.946 sec - in org.apache.hadoop.fs.contract.s3a.ITestS3AContractMkdir
Running org.apache.hadoop.fs.contract.s3n.ITestS3NContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 64.332 sec - in org.apache.hadoop.fs.contract.s3a.ITestS3AContractOpen
Tests run: 10, Failures: 0, Errors: 0, Skipped: 10, Time elapsed: 0.372 sec - in org.apache.hadoop.fs.contract.s3n.ITestS3NContractCreate
Running org.apache.hadoop.fs.contract.s3n.ITestS3NContractDelete
Running org.apache.hadoop.fs.contract.s3n.ITestS3NContractMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 8, Time elapsed: 0.455 sec - in org.apache.hadoop.fs.contract.s3n.ITestS3NContractDelete
Tests run: 5, Failures: 0, Errors: 0, Skipped: 5, Time elapsed: 0.375 sec - in org.apache.hadoop.fs.contract.s3n.ITestS3NContractMkdir
Running org.apache.hadoop.fs.contract.s3n.ITestS3NContractOpen
Running org.apache.hadoop.fs.contract.s3n.ITestS3NContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 0.406 sec - in org.apache.hadoop.fs.contract.s3n.ITestS3NContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 0.478 sec - in org.apache.hadoop.fs.contract.s3n.ITestS3NContractOpen
Running org.apache.hadoop.fs.contract.s3n.ITestS3NContractSeek
Running org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContext
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.313 sec - in org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContext
Tests run: 18, Failures: 0, Errors: 0, Skipped: 18, Time elapsed: 0.655 sec - in org.apache.hadoop.fs.contract.s3n.ITestS3NContractSeek
Running org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextCreateMkdir
Running org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextMainOperations
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 72.987 sec - in org.apache.hadoop.fs.contract.s3a.ITestS3AContractRename
Running org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextURI
Tests run: 10, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 73.829 sec - in org.apache.hadoop.fs.contract.s3a.ITestS3AContractCreate
Running org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextUtil
Tests run: 8, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 75.878 sec <<< FAILURE! - in org.apache.hadoop.fs.contract.s3a.ITestS3AContractDelete
testDeleteNonEmptyDirNonRecursive(org.apache.hadoop.fs.contract.s3a.ITestS3AContractDelete)  Time elapsed: 28.759 sec  <<< FAILURE!
java.lang.AssertionError: non recursive delete should have raised an exception, but completed with exit code true
	at org.junit.Assert.fail(Assert.java:88)
	at org.apache.hadoop.fs.contract.AbstractContractDeleteTest.testDeleteNonEmptyDirNonRecursive(AbstractContractDeleteTest.java:78)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:497)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
	at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)

testDeleteNonEmptyDirRecursive(org.apache.hadoop.fs.contract.s3a.ITestS3AContractDelete)  Time elapsed: 4.349 sec  <<< FAILURE!
java.lang.AssertionError: Deleted file: unexpectedly found s3a://mliu-test-aws-s3a/fork-2/test/testDeleteNonEmptyDirNonRecursive as  S3AFileStatus{path=s3a://mliu-test-aws-s3a/fork-2/test/testDeleteNonEmptyDirNonRecursive; isDirectory=true; modification_time=0; access_time=0; owner=mliu; group=mliu; permission=rwxrwxrwx; isSymlink=false} isEmptyDirectory=false
	at org.junit.Assert.fail(Assert.java:88)
	at org.apache.hadoop.fs.contract.ContractTestUtils.assertPathDoesNotExist(ContractTestUtils.java:754)
	at org.apache.hadoop.fs.contract.ContractTestUtils.assertDeleted(ContractTestUtils.java:612)
	at org.apache.hadoop.fs.contract.ContractTestUtils.assertDeleted(ContractTestUtils.java:590)
	at org.apache.hadoop.fs.contract.AbstractFSContractTestBase.assertDeleted(AbstractFSContractTestBase.java:349)
	at org.apache.hadoop.fs.contract.AbstractContractDeleteTest.testDeleteNonEmptyDirRecursive(AbstractContractDeleteTest.java:94)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:497)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
	at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)

Running org.apache.hadoop.fs.s3a.ITestBlockingThreadPoolExecutorService
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.869 sec - in org.apache.hadoop.fs.s3a.ITestBlockingThreadPoolExecutorService
Running org.apache.hadoop.fs.s3a.ITestS3AAWSCredentialsProvider
Tests run: 4, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 11.613 sec <<< FAILURE! - in org.apache.hadoop.fs.s3a.ITestS3AAWSCredentialsProvider
testAnonymousProvider(org.apache.hadoop.fs.s3a.ITestS3AAWSCredentialsProvider)  Time elapsed: 0.91 sec  <<< ERROR!
org.apache.hadoop.fs.s3a.AWSServiceIOException: initializing  on s3a://landsat-pds/scene_list.gz: com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException: Request is missing Authentication Token (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: MissingAuthenticationTokenException; Request ID: NS80UK0G6OKHI6IR7KCIV1VRONVV4KQNSO5AEMVJF66Q9ASUAAJG): Request is missing Authentication Token (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: MissingAuthenticationTokenException; Request ID: NS80UK0G6OKHI6IR7KCIV1VRONVV4KQNSO5AEMVJF66Q9ASUAAJG)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1529)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1167)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:948)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:661)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:635)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:618)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$300(AmazonHttpClient.java:586)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:573)
	at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:445)
	at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.doInvoke(AmazonDynamoDBClient.java:1722)
	at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.invoke(AmazonDynamoDBClient.java:1698)
	at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.createTable(AmazonDynamoDBClient.java:743)
	at com.amazonaws.services.dynamodbv2.document.DynamoDB.createTable(DynamoDB.java:96)
	at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.createTable(DynamoDBMetadataStore.java:413)
	at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initialize(DynamoDBMetadataStore.java:187)
	at org.apache.hadoop.fs.s3a.s3guard.S3Guard.getMetadataStore(S3Guard.java:85)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:252)
	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3246)
	at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:123)
	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3295)
	at org.apache.hadoop.fs.FileSystem$Cache.getUnique(FileSystem.java:3269)
	at org.apache.hadoop.fs.FileSystem.newInstance(FileSystem.java:529)
	at org.apache.hadoop.fs.s3a.ITestS3AAWSCredentialsProvider.testAnonymousProvider(ITestS3AAWSCredentialsProvider.java:133)

testBadCredentials(org.apache.hadoop.fs.s3a.ITestS3AAWSCredentialsProvider)  Time elapsed: 0.82 sec  <<< ERROR!
org.apache.hadoop.fs.s3a.AWSServiceIOException: initializing  on s3a://mliu-test-aws-s3a/: com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException: The security token included in the request is invalid. (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: UnrecognizedClientException; Request ID: UUBHUTU01895I8AH4CGS72R24FVV4KQNSO5AEMVJF66Q9ASUAAJG): The security token included in the request is invalid. (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: UnrecognizedClientException; Request ID: UUBHUTU01895I8AH4CGS72R24FVV4KQNSO5AEMVJF66Q9ASUAAJG)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1529)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1167)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:948)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:661)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:635)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:618)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$300(AmazonHttpClient.java:586)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:573)
	at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:445)
	at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.doInvoke(AmazonDynamoDBClient.java:1722)
	at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.invoke(AmazonDynamoDBClient.java:1698)
	at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.createTable(AmazonDynamoDBClient.java:743)
	at com.amazonaws.services.dynamodbv2.document.DynamoDB.createTable(DynamoDB.java:96)
	at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.createTable(DynamoDBMetadataStore.java:413)
	at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initialize(DynamoDBMetadataStore.java:187)
	at org.apache.hadoop.fs.s3a.s3guard.S3Guard.getMetadataStore(S3Guard.java:85)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:252)
	at org.apache.hadoop.fs.s3a.S3ATestUtils.createTestFileSystem(S3ATestUtils.java:103)
	at org.apache.hadoop.fs.s3a.S3ATestUtils.createTestFileSystem(S3ATestUtils.java:63)
	at org.apache.hadoop.fs.s3a.ITestS3AAWSCredentialsProvider.createFailingFS(ITestS3AAWSCredentialsProvider.java:76)
	at org.apache.hadoop.fs.s3a.ITestS3AAWSCredentialsProvider.testBadCredentials(ITestS3AAWSCredentialsProvider.java:102)

Running org.apache.hadoop.fs.s3a.ITestS3ABlockOutputArray
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.847 sec - in org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextUtil
Running org.apache.hadoop.fs.s3a.ITestS3ABlockOutputByteBuffer
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.233 sec - in org.apache.hadoop.fs.s3a.ITestS3ABlockOutputArray
Running org.apache.hadoop.fs.s3a.ITestS3ABlockOutputDisk
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.82 sec - in org.apache.hadoop.fs.s3a.ITestS3ABlockOutputByteBuffer
Running org.apache.hadoop.fs.s3a.ITestS3ABlocksize
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.371 sec - in org.apache.hadoop.fs.s3a.ITestS3ABlocksize
Running org.apache.hadoop.fs.s3a.ITestS3AConfiguration
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 60.955 sec - in org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextCreateMkdir
Running org.apache.hadoop.fs.s3a.ITestS3ACredentialsInURL
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.991 sec - in org.apache.hadoop.fs.s3a.ITestS3ABlockOutputDisk
Running org.apache.hadoop.fs.s3a.ITestS3AEncryption
Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 3.518 sec <<< FAILURE! - in org.apache.hadoop.fs.s3a.ITestS3ACredentialsInURL
testInvalidCredentialsFail(org.apache.hadoop.fs.s3a.ITestS3ACredentialsInURL)  Time elapsed: 0.95 sec  <<< FAILURE!
java.lang.AssertionError: Expected an AccessDeniedException, got S3AFileStatus{path=s3a://mliu-test-aws-s3a/; isDirectory=true; modification_time=0; access_time=0; owner=mliu; group=mliu; permission=rwxrwxrwx; isSymlink=false} isEmptyDirectory=false
	at org.junit.Assert.fail(Assert.java:88)
	at org.apache.hadoop.fs.s3a.ITestS3ACredentialsInURL.testInvalidCredentialsFail(ITestS3ACredentialsInURL.java:130)

Running org.apache.hadoop.fs.s3a.ITestS3AEncryptionAlgorithmPropagation
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.72 sec - in org.apache.hadoop.fs.s3a.ITestS3AEncryptionAlgorithmPropagation
Running org.apache.hadoop.fs.s3a.ITestS3AEncryptionBlockOutputStream
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 142.197 sec - in org.apache.hadoop.fs.contract.s3a.ITestS3AContractSeek
Running org.apache.hadoop.fs.s3a.ITestS3AFailureHandling
Tests run: 19, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 16.14 sec <<< FAILURE! - in org.apache.hadoop.fs.s3a.ITestS3AConfiguration
testUsernameFromUGI(org.apache.hadoop.fs.s3a.ITestS3AConfiguration)  Time elapsed: 0.923 sec  <<< FAILURE!
org.junit.ComparisonFailure: owner in S3AFileStatus{path=s3a://mliu-test-aws-s3a/; isDirectory=true; modification_time=0; access_time=0; owner=mliu; group=mliu; permission=rwxrwxrwx; isSymlink=false} isEmptyDirectory=false expected:<[alice]> but was:<[mliu]>
	at org.junit.Assert.assertEquals(Assert.java:115)
	at org.apache.hadoop.fs.s3a.ITestS3AConfiguration.testUsernameFromUGI(ITestS3AConfiguration.java:481)

Running org.apache.hadoop.fs.s3a.ITestS3AFileOperationCost
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.446 sec - in org.apache.hadoop.fs.s3a.ITestS3AFailureHandling
Running org.apache.hadoop.fs.s3a.ITestS3AFileSystemContract
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 50.437 sec - in org.apache.hadoop.fs.s3a.ITestS3AEncryption
Running org.apache.hadoop.fs.s3a.ITestS3AMiscOperations
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 51.966 sec - in org.apache.hadoop.fs.s3a.ITestS3AEncryptionBlockOutputStream
Running org.apache.hadoop.fs.s3a.ITestS3ATemporaryCredentials
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.832 sec - in org.apache.hadoop.fs.s3a.ITestS3AMiscOperations
Tests run: 7, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 52.913 sec <<< FAILURE! - in org.apache.hadoop.fs.s3a.ITestS3AFileOperationCost
testFakeDirectoryDeletion(org.apache.hadoop.fs.s3a.ITestS3AFileOperationCost)  Time elapsed: 19.243 sec  <<< FAILURE!
java.lang.AssertionError: after rename(srcFilePath, destFilePath): directories_created expected:<1> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.apache.hadoop.fs.s3a.S3ATestUtils$MetricDiff.assertDiffEquals(S3ATestUtils.java:431)
	at org.apache.hadoop.fs.s3a.ITestS3AFileOperationCost.testFakeDirectoryDeletion(ITestS3AFileOperationCost.java:254)

testCostOfGetFileStatusOnNonEmptyDir(org.apache.hadoop.fs.s3a.ITestS3AFileOperationCost)  Time elapsed: 5.892 sec  <<< FAILURE!
java.lang.AssertionError: FileStatus says directory isempty: S3AFileStatus{path=s3a://mliu-test-aws-s3a/fork-6/test/empty; isDirectory=true; modification_time=0; access_time=0; owner=mliu; group=mliu; permission=rwxrwxrwx; isSymlink=false} isEmptyDirectory=true
ls s3a://mliu-test-aws-s3a/fork-6/test/empty [00] S3AFileStatus{path=s3a://mliu-test-aws-s3a/fork-6/test/empty/simple.txt; isDirectory=false; length=0; replication=1; blocksize=33554432; modification_time=1480569669039; access_time=0; owner=mliu; group=mliu; permission=rw-rw-rw-; isSymlink=false} isEmptyDirectory=false

S3AFileSystem{uri=s3a://mliu-test-aws-s3a, workingDir=s3a://mliu-test-aws-s3a/user/mliu, inputPolicy=normal,
partSize=104857600, enableMultiObjectsDelete=true, maxKeys=5000, readAhead=65536, blockSize=33554432,
multiPartThreshold=2147483647, executor=BlockingThreadPoolExecutorService{SemaphoredDelegatingExecutor{permitCount=25,
available=25, waiting=0}, activeCount=0}, statistics {10240 bytes read, 10240 bytes written,
26 read ops, 0 large read ops, 66 write ops}, metrics {{Context=S3AFileSystem} {FileSystemId=66ae0ffd-8746-4911-88df-d73e3b217dab-mliu-test-aws-s3a}
{fsURI=s3a://mliu-test-aws-s3a/} {files_created=1} {files_copied=0} {files_copied_bytes=0}
{files_deleted=0} {fake_directories_deleted=3} {directories_created=2} {directories_deleted=0}
{ignored_errors=0} {op_copy_from_local_file=0} {op_exists=0} {op_get_file_status=6} {op_glob_status=0}
{op_is_directory=0} {op_is_file=0} {op_list_files=0} {op_list_located_status=0} {op_list_status=0}
{op_mkdirs=2} {op_rename=0} {object_copy_requests=0} {object_delete_requests=1} {object_list_requests=3}
{object_continue_list_requests=0} {object_metadata_requests=6} {object_multipart_aborted=0}
{object_put_bytes=0} {object_put_requests=3} {object_put_requests_completed=3} {stream_write_failures=0}
{stream_write_block_uploads=0} {stream_write_block_uploads_committed=0} {stream_write_block_uploads_aborted=0}
{stream_write_total_time=0} {stream_write_total_data=0} {object_put_requests_active=0} {object_put_bytes_pending=0}
{stream_write_block_uploads_active=0} {stream_write_block_uploads_pending=0} {stream_write_block_uploads_data_pending=0}
{stream_read_fully_operations=0} {stream_bytes_skipped_on_seek=0} {stream_bytes_backwards_on_seek=0}
{stream_bytes_read=0} {streamOpened=0} {stream_read_operations_incomplete=0} {stream_bytes_discarded_in_abort=0}
{stream_close_operations=0} {stream_read_operations=0} {stream_aborted=0} {stream_forward_seek_operations=0}
{stream_backward_seek_operations=0} {streamClosed=0} {stream_seek_operations=0} {stream_bytes_read_in_close=0}
{stream_read_exceptions=0} }}
	at org.junit.Assert.fail(Assert.java:88)
	at org.apache.hadoop.fs.s3a.ITestS3AFileOperationCost.testCostOfGetFileStatusOnNonEmptyDir(ITestS3AFileOperationCost.java:139)

Running org.apache.hadoop.fs.s3a.scale.ITestS3ADeleteFilesOneByOne
Running org.apache.hadoop.fs.s3a.ITestS3ATestUtils
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.19 sec - in org.apache.hadoop.fs.s3a.ITestS3ATestUtils
Running org.apache.hadoop.fs.s3a.scale.ITestS3ADeleteManyFiles
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 6.948 sec - in org.apache.hadoop.fs.s3a.scale.ITestS3ADeleteFilesOneByOne
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 6.651 sec - in org.apache.hadoop.fs.s3a.scale.ITestS3ADeleteManyFiles
Running org.apache.hadoop.fs.s3a.scale.ITestS3ADirectoryPerformance
Running org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.942 sec - in org.apache.hadoop.fs.s3a.ITestS3ATemporaryCredentials
Running org.apache.hadoop.fs.s3a.yarn.ITestS3A
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.445 sec - in org.apache.hadoop.fs.s3a.yarn.ITestS3A
Running org.apache.hadoop.fs.s3a.yarn.ITestS3AMiniYarnCluster
Tests run: 5, Failures: 0, Errors: 0, Skipped: 5, Time elapsed: 19.071 sec - in org.apache.hadoop.fs.s3a.scale.ITestS3ADirectoryPerformance
{code}

For {{MockS3ClientFactory}}, my idea was that backing {{createDynamoDBClient}} with DynamoDBLocal
for unit tests will help us find bugs more easily and earlier than mocked objects would. For
integration tests, it will go to the real AWS DynamoDB service as expected. If I cannot find an
easy approach now, we can address this along with [HADOOP-13589].
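
To make the intent concrete, here is a minimal, hypothetical sketch of the two code paths I have
in mind. It is not code from the patch; the class and method names are illustrative only, and the
unit-test path assumes a DynamoDBLocal endpoint listening on {{http://localhost:8000}}.
{code}
// Illustrative sketch only (not the patch code): unit tests talk to a local
// DynamoDBLocal endpoint, while integration tests use the default builder and
// therefore go to the real AWS DynamoDB service.
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;

public class LocalDynamoDBClientFactory {

  /**
   * Unit-test path: point the client at a DynamoDBLocal endpoint (assumed to
   * be running on localhost:8000), so table-creation and item-level bugs
   * surface earlier than they would with mocked objects.
   */
  public static AmazonDynamoDB createLocalClient() {
    return AmazonDynamoDBClientBuilder.standard()
        .withEndpointConfiguration(
            new AwsClientBuilder.EndpointConfiguration(
                "http://localhost:8000", "us-west-2"))
        // DynamoDBLocal accepts any credentials; real ones are not needed.
        .withCredentials(new AWSStaticCredentialsProvider(
            new BasicAWSCredentials("dummyAccessKey", "dummySecretKey")))
        .build();
  }

  /**
   * Integration-test path: the default builder resolves real credentials and
   * region, so requests go to the AWS DynamoDB service as expected.
   */
  public static AmazonDynamoDB createRemoteClient() {
    return AmazonDynamoDBClientBuilder.standard().build();
  }
}
{code}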

By the way, when I ran the integration tests myself, the s3n tests were included by default.
Is there a way to exclude them?

> S3Guard: Implement DynamoDBMetadataStore.
> -----------------------------------------
>
>                 Key: HADOOP-13449
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13449
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>            Reporter: Chris Nauroth
>            Assignee: Mingliang Liu
>         Attachments: HADOOP-13449-HADOOP-13345.000.patch, HADOOP-13449-HADOOP-13345.001.patch, HADOOP-13449-HADOOP-13345.002.patch, HADOOP-13449-HADOOP-13345.003.patch, HADOOP-13449-HADOOP-13345.004.patch, HADOOP-13449-HADOOP-13345.005.patch, HADOOP-13449-HADOOP-13345.006.patch, HADOOP-13449-HADOOP-13345.007.patch, HADOOP-13449-HADOOP-13345.008.patch, HADOOP-13449-HADOOP-13345.009.patch, HADOOP-13449-HADOOP-13345.010.patch
>
>
> Provide an implementation of the metadata store backed by DynamoDB.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
