hadoop-hdfs-issues mailing list archives

From "Elek, Marton (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDDS-894) Content-length should be set for ozone s3 ranged download
Date Mon, 03 Dec 2018 15:08:01 GMT

     [ https://issues.apache.org/jira/browse/HDDS-894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Elek, Marton updated HDDS-894:
------------------------------
    Status: Patch Available  (was: In Progress)

> Content-length should be set for ozone s3 ranged download
> ---------------------------------------------------------
>
>                 Key: HDDS-894
>                 URL: https://issues.apache.org/jira/browse/HDDS-894
>             Project: Hadoop Distributed Data Store
>          Issue Type: Sub-task
>          Components: S3
>            Reporter: Elek, Marton
>            Assignee: Elek, Marton
>            Priority: Major
>         Attachments: HDDS-894.001.patch
>
>
> Some of the seek-related s3a unit tests fail when using ozone s3g as the destination endpoint.
> For example ITestS3ContractSeek.testRandomSeeks is failing with:
> {code}
> org.apache.hadoop.fs.s3a.AWSClientIOException: read on s3a://buckettest/test/testrandomseeks.bin: com.amazonaws.SdkClientException: Data read has a different length than the expected: dataLength=9411; expectedLength=0; includeSkipped=true; in.getClass()=class com.amazonaws.services.s3.AmazonS3Client$2; markedSupported=false; marked=0; resetSinceLastMarked=false; markCount=0; resetCount=0: Data read has a different length than the expected: dataLength=9411; expectedLength=0; includeSkipped=true; in.getClass()=class com.amazonaws.services.s3.AmazonS3Client$2; markedSupported=false; marked=0; resetSinceLastMarked=false; markCount=0; resetCount=0
> 	at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:189)
> 	at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:111)
> 	at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:265)
> 	at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:322)
> 	at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:261)
> 	at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:236)
> 	at org.apache.hadoop.fs.s3a.S3AInputStream.read(S3AInputStream.java:446)
> 	at java.io.DataInputStream.readFully(DataInputStream.java:195)
> 	at java.io.DataInputStream.readFully(DataInputStream.java:169)
> 	at org.apache.hadoop.fs.contract.ContractTestUtils.verifyRead(ContractTestUtils.java:256)
> 	at org.apache.hadoop.fs.contract.AbstractContractSeekTest.testRandomSeeks(AbstractContractSeekTest.java:357)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:498)
> 	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> 	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> 	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> 	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 	at java.lang.Thread.run(Thread.java:745)
> {code}
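> The failure comes from the AWS SDK's client-side length check: it compares the number of bytes actually read against the Content-Length it parsed from the response, and when no Content-Length header is present the expected length shows up as 0 (the expectedLength=0 in the trace above). The following is a minimal sketch of my own (not part of the original report; the endpoint, credentials, bucket and key are placeholders) that issues the same kind of ranged GET against a local ozone s3g:
> {code}
> import com.amazonaws.auth.AWSStaticCredentialsProvider;
> import com.amazonaws.auth.BasicAWSCredentials;
> import com.amazonaws.client.builder.AwsClientBuilder.EndpointConfiguration;
> import com.amazonaws.services.s3.AmazonS3;
> import com.amazonaws.services.s3.AmazonS3ClientBuilder;
> import com.amazonaws.services.s3.model.GetObjectRequest;
> import com.amazonaws.services.s3.model.S3Object;
> import com.amazonaws.util.IOUtils;
>
> public class RangedGetRepro {
>   public static void main(String[] args) throws Exception {
>     AmazonS3 s3 = AmazonS3ClientBuilder.standard()
>         // placeholder endpoint/credentials for a local s3g instance
>         .withEndpointConfiguration(
>             new EndpointConfiguration("http://localhost:9878", "us-east-1"))
>         .withCredentials(new AWSStaticCredentialsProvider(
>             new BasicAWSCredentials("accessKey", "secretKey")))
>         .enablePathStyleAccess()
>         .build();
>     // the same kind of ranged read the failing seek test performs
>     GetObjectRequest req =
>         new GetObjectRequest("buckettest", "test/testrandomseeks.bin")
>             .withRange(208, 10239);
>     try (S3Object obj = s3.getObject(req)) {
>       byte[] data = IOUtils.toByteArray(obj.getObjectContent());
>       System.out.println("read " + data.length + " bytes");
>     }
>   }
> }
> {code}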
> Checking the requests/responses with a mitm proxy, I found that it works well up to a given range length.
> But if the response is bigger than a specific size, the response is chunked by the Jetty server, which could be the problem.
> Response for the problematic request:
> {code}
> Date:                    Mon, 03 Dec 2018 11:27:55 GMT
> Cache-Control:           no-cache
> Expires:                 Mon, 03 Dec 2018 11:27:55 GMT
> Date:                    Mon, 03 Dec 2018 11:27:55 GMT
> Pragma:                  no-cache
> X-Content-Type-Options:  nosniff
> X-FRAME-OPTIONS:         SAMEORIGIN
> X-XSS-Protection:        1; mode=block
> Content-Range:           bytes 208-10239/10240
> Accept-Ranges:           bytes
> Content-Type:            application/octet-stream
> Last-Modified:           Mon, 03 Dec 2018 11:27:54 GMT
> Server:                  Ozone
> x-amz-id-2:              gk2CRdkmri0mc1
> x-amz-request-id:        eb60ee7f-55df-4439-b22a-7d92076f6eee
> Transfer-Encoding:       chunked
> {code}
> As you can see, the Content-Length header is missing and the response is sent with Transfer-Encoding: chunked instead.
> Based on [this|https://www.eclipse.org/lists/jetty-users/msg03053.html] comment, the solution is to explicitly add the Content-Length to the response.
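> The s3g endpoints are JAX-RS resources, so a minimal sketch of that idea could look like the following (an illustration of the approach under that assumption, not the attached HDDS-894 patch): compute the byte count of the requested range and set Content-Length explicitly, so Jetty does not fall back to chunked transfer encoding.
> {code}
> import javax.ws.rs.core.HttpHeaders;
> import javax.ws.rs.core.Response;
> import javax.ws.rs.core.StreamingOutput;
>
> public class RangedResponses {
>   // rangeStart/rangeEnd are the inclusive offsets parsed from the Range
>   // header; body streams exactly that slice of the object
>   static Response rangedResponse(long rangeStart, long rangeEnd,
>       long objectLength, StreamingOutput body) {
>     long contentLength = rangeEnd - rangeStart + 1;
>     return Response.status(Response.Status.PARTIAL_CONTENT)
>         .entity(body)
>         // set the length explicitly: with only a StreamingOutput entity
>         // the container cannot compute it and would chunk the response
>         .header(HttpHeaders.CONTENT_LENGTH, contentLength)
>         .header("Content-Range",
>             "bytes " + rangeStart + "-" + rangeEnd + "/" + objectLength)
>         .build();
>   }
> }
> {code}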


