hadoop-common-issues mailing list archives

From "Steve Loughran (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-15625) S3A input stream to use etags/version number to detect changed source files
Date Wed, 13 Mar 2019 18:15:00 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16791968#comment-16791968 ]

Steve Loughran commented on HADOOP-15625:
-----------------------------------------

Regarding those test failures: none of them are in a codepath related to this patch.

{code}
[ERROR] Tests run: 43, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 80.541 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.ITestS3AFileSystemContract
[ERROR] testRenameDirToSelf(org.apache.hadoop.fs.s3a.ITestS3AFileSystemContract)  Time elapsed: 1.773 s  <<< ERROR!
java.lang.NullPointerException
	at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.lambda$listChildren$4(DynamoDBMetadataStore.java:653)
	at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
	at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:265)
	at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:322)
	at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:261)
	at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:236)
	at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.listChildren(DynamoDBMetadataStore.java:625)
	at org.apache.hadoop.fs.s3a.s3guard.DescendantsIterator.next(DescendantsIterator.java:132)
	at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.deleteSubtree(DynamoDBMetadataStore.java:515)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.innerDelete(S3AFileSystem.java:2042)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.delete(S3AFileSystem.java:1951)
	at org.apache.hadoop.fs.FileSystemContractBaseTest.cleanupDir(FileSystemContractBaseTest.java:92)
	at org.apache.hadoop.fs.FileSystemContractBaseTest.tearDown(FileSystemContractBaseTest.java:85)
	at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.lang.Thread.run(Thread.java:745)

[INFO] Running org.apache.hadoop.fs.s3a.ITestS3AContractGetFileStatusV1List
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.479 s - in org.apache.hadoop.fs.s3a.ITestS3AEmptyDirectory
[INFO] Running org.apache.hadoop.fs.s3a.ITestS3AEncryptionAlgorithmValidation
[WARNING] Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.021 s - in org.apache.hadoop.fs.s3a.ITestS3AEncryptionAlgorithmValidation
[INFO] Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.613 s - in org.apache.hadoop.fs.s3a.select.ITestS3SelectCLI
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 64.127 s - in org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcurrentOps
[INFO] Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 56.562 s - in org.apache.hadoop.fs.s3a.ITestS3AContractGetFileStatusV1List
[INFO] Tests run: 45, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 114.611 s - in org.apache.hadoop.fs.s3a.select.ITestS3Select
[ERROR] Tests run: 16, Failures: 0, Errors: 2, Skipped: 1, Time elapsed: 233.061 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB
[ERROR] testDynamoDBInitDestroyCycle(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB)  Time elapsed: 18.407 s  <<< ERROR!
java.lang.IllegalArgumentException: Read capacity must have value greater than or equal to 1.
	at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
	at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$SetCapacity.run(S3GuardTool.java:564)
	at org.apache.hadoop.fs.s3a.s3guard.S3GuardToolTestHelper.exec(S3GuardToolTestHelper.java:79)
	at org.apache.hadoop.fs.s3a.s3guard.S3GuardToolTestHelper.exec(S3GuardToolTestHelper.java:51)
	at org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB.testDynamoDBInitDestroyCycle(ITestS3GuardToolDynamoDB.java:282)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.lang.Thread.run(Thread.java:745)

[ERROR] testBucketInfoUnguarded(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB)  Time elapsed: 1.411 s  <<< ERROR!
java.io.FileNotFoundException: DynamoDB table 'testBucketInfoUnguarded-212fd4bb-f8bd-430b-8fea-739a53c131c8' does not exist in region eu-west-1; auto-creation is turned off
	at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initTable(DynamoDBMetadataStore.java:1228)
	at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initialize(DynamoDBMetadataStore.java:359)
	at org.apache.hadoop.fs.s3a.s3guard.S3Guard.getMetadataStore(S3Guard.java:99)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:394)
	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3324)
	at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:136)
	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3373)
	at org.apache.hadoop.fs.FileSystem$Cache.getUnique(FileSystem.java:3347)
	at org.apache.hadoop.fs.FileSystem.newInstance(FileSystem.java:544)
	at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$BucketInfo.run(S3GuardTool.java:1140)
	at org.apache.hadoop.fs.s3a.s3guard.S3GuardToolTestHelper.exec(S3GuardToolTestHelper.java:79)
	at org.apache.hadoop.fs.s3a.s3guard.S3GuardToolTestHelper.exec(S3GuardToolTestHelper.java:51)
	at org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase.testBucketInfoUnguarded(AbstractS3GuardToolTestBase.java:341)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.lang.Thread.run(Thread.java:745)
Caused by: com.amazonaws.services.dynamodbv2.model.ResourceNotFoundException: Requested resource not found: Table: testBucketInfoUnguarded-212fd4bb-f8bd-430b-8fea-739a53c131c8 not found (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ResourceNotFoundException; Request ID: PGLDQJH8VD6R0T4N6T83JS2R3VVV4KQNSO5AEMVJF66Q9ASUAAJG)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1640)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1304)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1058)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
	at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
	at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.doInvoke(AmazonDynamoDBClient.java:3443)
	at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.invoke(AmazonDynamoDBClient.java:3419)
	at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.executeDescribeTable(AmazonDynamoDBClient.java:1660)
	at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.describeTable(AmazonDynamoDBClient.java:1635)
	at com.amazonaws.services.dynamodbv2.document.Table.describe(Table.java:137)
	at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initTable(DynamoDBMetadataStore.java:1186)
	... 27 more
{code}


> S3A input stream to use etags/version number to detect changed source files
> ---------------------------------------------------------------------------
>
>                 Key: HADOOP-15625
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15625
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.2.0
>            Reporter: Brahma Reddy Battula
>            Assignee: Ben Roling
>            Priority: Major
>         Attachments: HADOOP--15625-006.patch, HADOOP-15625-001.patch, HADOOP-15625-002.patch,
> HADOOP-15625-003.patch, HADOOP-15625-004.patch, HADOOP-15625-005.patch, HADOOP-15625-006.patch,
> HADOOP-15625-007.patch, HADOOP-15625-008.patch, HADOOP-15625-009.patch, HADOOP-15625-010.patch,
> HADOOP-15625-011.patch, HADOOP-15625-012.patch, HADOOP-15625-013-delta.patch, HADOOP-15625-013.patch,
> HADOOP-15625-014.patch, HADOOP-15625-015.patch, HADOOP-15625-015.patch, HADOOP-15625-016.patch,
> HADOOP-15625-017.patch
>
>
> S3A input stream doesn't handle changing source files any better than the other cloud
> store connectors. Specifically: it doesn't notice that the source file has changed, it caches the
> length from startup, and whenever a seek triggers a new GET you may get old data, new data, or
> even go from new data back to old data due to eventual consistency.
> We can't do anything to stop this, but we could detect changes by
> # caching the etag of the first HEAD/GET (we don't get that HEAD on open with S3Guard,
> BTW)
> # verifying the etag of the response on future GET requests
> # raising an IOE if the remote file changed during the read.
> It's a more dramatic failure, but it stops changes silently corrupting things.
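
A minimal sketch of the change detection described above, written directly against the v1 AWS SDK rather than the S3A input stream internals the actual patch touches; the class name EtagCheckingReader and its reopen() method are hypothetical and exist only for illustration:

{code}
import java.io.IOException;
import java.io.InputStream;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.S3Object;

/**
 * Illustrative only: remember the ETag seen on the first GET and fail the read
 * if a later re-open returns an object with a different ETag.
 */
public class EtagCheckingReader {
  private final AmazonS3 s3;
  private final String bucket;
  private final String key;
  private String expectedEtag;   // ETag captured on the first GET

  public EtagCheckingReader(AmazonS3 s3, String bucket, String key) {
    this.s3 = s3;
    this.bucket = bucket;
    this.key = key;
  }

  /** (Re)open the object at the given offset, verifying it is still the version we started reading. */
  public InputStream reopen(long offset) throws IOException {
    S3Object object = s3.getObject(new GetObjectRequest(bucket, key).withRange(offset));
    String etag = object.getObjectMetadata().getETag();
    if (expectedEtag == null) {
      expectedEtag = etag;       // first open: record the version being read
    } else if (!expectedEtag.equals(etag)) {
      object.close();            // release the HTTP connection before failing
      throw new IOException("Object s3a://" + bucket + "/" + key
          + " changed during read: expected ETag " + expectedEtag + ", found " + etag);
    }
    return object.getObjectContent();
  }
}
{code}

A caller would invoke reopen() on the first read and again whenever a seek forces a new GET; the read then fails fast with an IOException instead of silently mixing bytes from two versions of the object.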



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

