hadoop-hdfs-issues mailing list archives

From "Bharat Viswanadham (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDDS-693) Support multi-chunk signatures in s3g PUT object endpoint
Date Wed, 24 Oct 2018 19:53:00 GMT

     [ https://issues.apache.org/jira/browse/HDDS-693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bharat Viswanadham updated HDDS-693:
------------------------------------
          Resolution: Fixed
    Target Version/s: 0.3.0, 0.4.0  (was: 0.3.0)
              Status: Resolved  (was: Patch Available)

> Support multi-chunk signatures in s3g PUT object endpoint
> ---------------------------------------------------------
>
>                 Key: HDDS-693
>                 URL: https://issues.apache.org/jira/browse/HDDS-693
>             Project: Hadoop Distributed Data Store
>          Issue Type: Sub-task
>          Components: S3
>            Reporter: Elek, Marton
>            Assignee: Elek, Marton
>            Priority: Major
>         Attachments: HDDS-693.001.patch, HDDS-693.002.patch
>
>
> I tried to execute the s3a unit tests with our s3 gateway, and in ITestS3AContractMkdir.testMkDirRmRfDir I got the following error:
> {code}
> org.apache.hadoop.fs.FileAlreadyExistsException: Can't make directory for path 's3a://buckettest/test' since it is a file.
> 	at org.apache.hadoop.fs.s3a.S3AFileSystem.innerMkdirs(S3AFileSystem.java:2077)
> 	at org.apache.hadoop.fs.s3a.S3AFileSystem.mkdirs(S3AFileSystem.java:2027)
> 	at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2274)
> 	at org.apache.hadoop.fs.contract.AbstractContractMkdirTest.testMkDirRmRfDir(AbstractContractMkdirTest.java:55)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:498)
> 	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> 	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> 	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> 	at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}
> Checking the created key I found that its size is not zero (it's a directory entry, so it should be empty) but 86 bytes. Checking the content of the key I can see:
> {code}
>  cat /tmp/qwe2
> 0;chunk-signature=23abb2bd920ddeeaac78a63ed808bc59fa6e7d3ef0e356474b82cdc2f8c93c40
> {code}
> The reason is that it was uploaded with a multi-chunk signature.
> When the header x-amz-content-sha256=STREAMING-AWS4-HMAC-SHA256-PAYLOAD is used, the body is special: multiple signed chunks follow each other, each prefixed with an additional signature line.
> See the documentation for more details:
> https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-streaming.html
> In this jira I would add initial support for this.
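To illustrate the framing described above, here is a minimal sketch (not the committed HDDS-693 patch) of unwrapping the aws-chunked body format: each chunk is `<hex-size>;chunk-signature=<sig>\r\n<data>\r\n`, and a zero-sized chunk terminates the stream. The class name is hypothetical, and signature verification is deliberately omitted.

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

// Hypothetical sketch: strips the SigV4 streaming ("aws-chunked") framing
// from a fully buffered request body and returns the raw payload bytes.
public class ChunkedPayloadDecoder {

  public static byte[] decode(byte[] body) {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    int pos = 0;
    while (pos < body.length) {
      int eol = findCrlf(body, pos);          // end of "<size>;chunk-signature=<sig>"
      if (eol < 0) {
        throw new IllegalArgumentException("Malformed chunk header");
      }
      String header =
          new String(body, pos, eol - pos, StandardCharsets.US_ASCII);
      int semi = header.indexOf(';');
      int size = Integer.parseInt(
          semi >= 0 ? header.substring(0, semi) : header, 16);
      if (size == 0) {
        break;                                 // final empty chunk: end of payload
      }
      pos = eol + 2;                           // skip CRLF after the header line
      out.write(body, pos, size);              // copy the raw chunk data
      pos += size + 2;                         // skip data and its trailing CRLF
    }
    return out.toByteArray();
  }

  // Returns the index of the next CRLF at or after 'from', or -1 if absent.
  private static int findCrlf(byte[] b, int from) {
    for (int i = from; i < b.length - 1; i++) {
      if (b[i] == '\r' && b[i + 1] == '\n') {
        return i;
      }
    }
    return -1;
  }
}
```

A real implementation would decode the stream incrementally and verify each chunk-signature against the seed signature from the Authorization header, per the AWS documentation linked above; this sketch only shows why the stored key contained the extra signature line.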



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org

