hadoop-hdfs-issues mailing list archives

From "Daryn Sharp (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-3318) Hftp hangs on transfers >2GB
Date Wed, 25 Apr 2012 14:36:18 GMT

    [ https://issues.apache.org/jira/browse/HDFS-3318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13261677#comment-13261677 ]

Daryn Sharp commented on HDFS-3318:
-----------------------------------

I doubt the per-read buffer is going to be >2GB for at least 5-10 years. By that time,
I think Java will have fixed the issue. :)
                
> Hftp hangs on transfers >2GB
> ----------------------------
>
>                 Key: HDFS-3318
>                 URL: https://issues.apache.org/jira/browse/HDFS-3318
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs client
>    Affects Versions: 0.24.0, 0.23.3, 2.0.0
>            Reporter: Daryn Sharp
>            Assignee: Daryn Sharp
>            Priority: Blocker
>         Attachments: HDFS-3318.patch
>
>
> Hftp transfers >2GB hang after the transfer is complete. The problem appears to be
> caused by Java internally using an int for the content length. When the length overflows
> at 2GB, the stream stops bounds-checking reads. The client therefore continues reading
> after all data has been received and blocks until the server times out the connection --
> _many_ minutes later. In conjunction with hftp timeouts, all transfers >2G fail with a
> read timeout.
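The overflow described above can be sketched in a few lines of Java. This is a hypothetical illustration of the failure mode (a long content length truncated to a negative int, defeating a `remaining > 0` style bounds check), not the JDK's or Hadoop's actual code:

```java
public class ContentLengthOverflow {
    public static void main(String[] args) {
        // A transfer slightly over 2 GB: fits in a long, but not an int.
        long contentLength = 2_147_483_648L + 1024; // 2 GB + 1 KB

        // Storing the length in an int (as older JDK HTTP internals did)
        // silently wraps to a negative value.
        int truncated = (int) contentLength;
        System.out.println("long length: " + contentLength);
        System.out.println("int length:  " + truncated);

        // A bounds check written against the int value never fires, so the
        // client keeps reading past the end of the body until the server
        // times out the connection.
        boolean boundsCheckActive = truncated > 0;
        System.out.println("bounds check active: " + boundsCheckActive);
    }
}
```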

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
