hadoop-hdfs-issues mailing list archives

From "nkeywal (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-3701) HDFS may miss the final block when reading a file opened for writing if one of the datanode is dead
Date Mon, 27 Aug 2012 15:40:09 GMT

    [ https://issues.apache.org/jira/browse/HDFS-3701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13442480#comment-13442480 ]

nkeywal commented on HDFS-3701:
-------------------------------

Hi Uma,

I've made a few changes and it seems to work. The HBase tests pass with this new HDFS version,
and the HDFS tests are still running locally but look ok as well. HDFS-3701.ontopof.v1.patch
contains only my changes.

> HDFS may miss the final block when reading a file opened for writing if one of the datanode is dead
> ---------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-3701
>                 URL: https://issues.apache.org/jira/browse/HDFS-3701
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs client
>    Affects Versions: 1.0.3
>            Reporter: nkeywal
>            Priority: Critical
>         Attachments: HDFS-3701.patch
>
>
> When the file is opened for writing, the DFSClient calls one of the datanodes owning the
> last block to get its size. If this datanode is dead, the socket exception is swallowed and
> the size of this last block is taken to be zero. This seems to be fixed on trunk, but I didn't
> find a related Jira. On 1.0.3, it's not fixed. It's in the same area as HDFS-1950 and HDFS-3222.
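
For context, here is a minimal standalone Java sketch of the failure mode described above. It is
not the real DFSClient code; the Replica interface and fetchBlockLength method are made up for
illustration. Asking a single datanode for the last block length and swallowing the socket
exception silently turns a dead datanode into an "empty" final block, while falling back to the
remaining replicas keeps the data visible.

    import java.io.IOException;
    import java.util.Arrays;
    import java.util.List;

    public class LastBlockLengthSketch {

        // Hypothetical stand-in for the RPC that asks a datanode for the
        // length of the block replica it holds.
        interface Replica {
            long fetchBlockLength() throws IOException;
        }

        // Buggy pattern described in the issue: ask a single replica, swallow
        // the socket exception, and fall back to 0. The reader then believes
        // the last block is empty and misses its data.
        static long buggyLastBlockLength(Replica replica) {
            try {
                return replica.fetchBlockLength();
            } catch (IOException swallowed) {
                return 0; // dead datanode silently becomes "empty last block"
            }
        }

        // Safer pattern: try each replica in turn and only fail once every
        // replica of the last block is unreachable.
        static long lastBlockLength(List<Replica> replicas) throws IOException {
            IOException last = null;
            for (Replica r : replicas) {
                try {
                    return r.fetchBlockLength();
                } catch (IOException e) {
                    last = e; // remember the failure, move on to the next replica
                }
            }
            throw new IOException("could not reach any replica of the last block", last);
        }

        public static void main(String[] args) throws IOException {
            Replica dead = () -> { throw new IOException("connection refused"); };
            Replica live = () -> 4096L;

            // Prints 0: the data appears to be missing.
            System.out.println(buggyLastBlockLength(dead));
            // Prints 4096: the live replica is consulted after the dead one fails.
            System.out.println(lastBlockLength(Arrays.asList(dead, live)));
        }
    }

The second variant only fails when every replica is unreachable, which matches the point of the
issue: a single dead datanode should not make the last block of an open file appear empty.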

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
