hadoop-hdfs-issues mailing list archives

From "Daryn Sharp (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-3318) Hftp hangs on transfers >2GB
Date Wed, 25 Apr 2012 19:20:20 GMT

    [ https://issues.apache.org/jira/browse/HDFS-3318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13261970#comment-13261970 ]

Daryn Sharp commented on HDFS-3318:
-----------------------------------

bq. Why does the filelength now begin at startPos?

It's another bug, in how a successful read of the stream is verified, that I didn't
fully fix, but fixed "enough".  When EOF is encountered, the code checks
{noformat}if (currentPos < filelength) { EOFException }{noformat} to decide whether
the EOF was premature.  {{currentPos}} and {{filelength}} are *not relative* to
{{startPos}}, so it's not valid to compare the current position against the remaining
stream length (the content-length); {{filelength}} has to begin at {{startPos}}.

Ex. I have a 128-byte file.  I seek 100 bytes into it.  The remaining content-length
is 28.  My file length is not 28 bytes!  I read 10 more bytes and the connection
unexpectedly closes.  The broken premature-EOF condition fails to detect the fault
because (110 < 28) is false.  The correct check is (110 < 100+28).

{noformat}
      filelength
------------------------
       ^----------------
startPos  content-length
{noformat}
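
Putting the same numbers into code, a minimal sketch of the broken vs. corrected
check (the variable names are illustrative, not the actual client code):

{noformat}
// Illustrative sketch of the premature-EOF check using the numbers above;
// variable names are hypothetical, not the actual stream class fields.
long startPos = 100;                         // offset we seeked to
long contentLength = 28;                     // bytes the server sends from startPos
long filelength = startPos + contentLength;  // absolute end of the range: 128
long currentPos = 110;                       // connection dropped after reading 10 bytes

boolean brokenCheck  = currentPos < contentLength;  // 110 < 28  -> false, fault missed
boolean correctCheck = currentPos < filelength;     // 110 < 128 -> true, premature EOF detected
{noformat}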

I can file a separate jira for this 1-line fix if you'd like.
                
> Hftp hangs on transfers >2GB
> ----------------------------
>
>                 Key: HDFS-3318
>                 URL: https://issues.apache.org/jira/browse/HDFS-3318
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs client
>    Affects Versions: 0.24.0, 0.23.3, 2.0.0
>            Reporter: Daryn Sharp
>            Assignee: Daryn Sharp
>            Priority: Blocker
>         Attachments: HDFS-3318-1.patch, HDFS-3318.patch
>
>
> Hftp transfers >2GB hang after the transfer is complete.  The problem appears to be
> caused by Java internally using an int for the content length.  When it overflows at 2GB,
> the bounds of the reads on the input stream are no longer checked.  The client continues
> reading after all data is received, and blocks until the server times out the connection --
> _many_ minutes later.  In conjunction with hftp timeouts, all transfers >2GB fail with a
> read timeout.
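
As a quick illustration of the overflow (a sketch only, not the actual Hftp client code):
a content length above 2GB no longer fits in a Java int, so any int-based bounds check
stops working.

{noformat}
// Illustrative sketch: a >2GB content length overflows a Java int.
long contentLength = 3L * 1024 * 1024 * 1024;   // 3GB response
int asInt = (int) contentLength;                // wraps to -1073741824
// A bounds check against the int value can never trip, so the client keeps
// reading after all data has arrived.  Parsing the Content-Length header into
// a long preserves the real bound:
long asLong = Long.parseLong("3221225472");     // 3221225472 == 3GB
{noformat}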

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
