hadoop-hdfs-issues mailing list archives

From "nkeywal (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-3701) HDFS may miss the final block when reading a file opened for writing if one of the datanodes is dead
Date Thu, 09 Aug 2012 15:22:19 GMT

    [ https://issues.apache.org/jira/browse/HDFS-3701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13431898#comment-13431898 ]

nkeywal commented on HDFS-3701:
-------------------------------

Thanks for the quick answer, Uma. It's not a matter of days for us, so we can wait for you.
In the short term I will propose something for HDFS-3705, as the workaround I have for it in
HBase is a little too much of a workaround :-).

> HDFS may miss the final block when reading a file opened for writing if one of the datanodes is dead
> ---------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-3701
>                 URL: https://issues.apache.org/jira/browse/HDFS-3701
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs client
>    Affects Versions: 1.0.3
>            Reporter: nkeywal
>            Priority: Critical
>
> When the file is opened for writing, the DFSClient calls one of the datanodes owning the
> last block to get its size. If this datanode is dead, the socket exception is swallowed and
> the size of this last block is reported as zero. This seems to be fixed on trunk, but I didn't
> find a related Jira. On 1.0.3, it is not fixed. It is in the same area as HDFS-1950 or HDFS-3222.
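
For readers hitting this on the 1.0.x client, a minimal Java sketch of the behaviour the report
argues for is shown below. The ReplicaLocation interface and fetchVisibleLength method are
hypothetical placeholders, not DFSClient APIs; the point is only the failover pattern: try every
replica of the under-construction block and propagate the failure, rather than swallowing the
socket exception and reporting a zero-length final block.

{code:java}
import java.io.IOException;
import java.util.List;

/**
 * Minimal sketch (not the actual DFSClient code) of the replica-failover
 * pattern implied by the report: when a file is still open for writing,
 * the length of its last block has to be asked from a datanode holding a
 * replica. If the first datanode is dead, the client should try the next
 * replica and ultimately fail loudly instead of treating the block as empty.
 */
public class LastBlockLengthProbe {

    /** Hypothetical stand-in for a datanode replica location. */
    public interface ReplicaLocation {
        String name();
        /** Asks the datanode for the number of bytes visible in the block. */
        long fetchVisibleLength() throws IOException;
    }

    /**
     * Returns the visible length of the last (under-construction) block.
     * A dead datanode only causes failover to the next replica, never a
     * silent "length = 0" answer.
     */
    public static long lastBlockLength(List<ReplicaLocation> replicas) throws IOException {
        IOException lastFailure = null;
        for (ReplicaLocation replica : replicas) {
            try {
                return replica.fetchVisibleLength();
            } catch (IOException e) {
                // Remember the failure and move on to the next replica
                // instead of swallowing it and returning 0.
                lastFailure = e;
            }
        }
        throw new IOException("Could not reach any replica of the last block", lastFailure);
    }
}
{code}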

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
