hadoop-common-dev mailing list archives

From "Hairong Kuang (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-5133) FSNameSystem#addStoredBlock does not handle inconsistent block length correctly
Date Fri, 06 Feb 2009 20:06:59 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-5133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12671287#action_12671287 ]

Hairong Kuang commented on HADOOP-5133:
---------------------------------------

Summary of an offline discussion:
1. No block locations should be added to blocksMap for an incomplete block, so HADOOP-5134
should be fixed;
2. The length of the previous block should be set to the default block length when the client
calls addBlock asking for an additional block for the file;
3. When receiving blockReceived from a DN, the NameNode checks the length of the new replica:
    If the new replica's length is greater than the default block length or smaller than the
current block length, mark the new replica as corrupt;
    If the new replica's length is greater than the current block length (but no greater than
the default), set the block's length to be the new replica's length and mark the existing
replicas of the block as corrupt.

I believe that most of the logic for 3 is already in the 0.18.3 branch.
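The blockReceived check in point 3 can be sketched roughly as follows. This is a minimal illustration only; the class, enum, and method names are hypothetical and do not correspond to the actual FSNamesystem code:

```java
// Hypothetical sketch of the blockReceived length check in point 3 above.
// Names and signature are illustrative, not the real FSNamesystem code.
public class BlockReceivedCheck {

    enum Action {
        ACCEPT,                         // lengths are consistent
        MARK_NEW_REPLICA_CORRUPT,       // the new replica is bad
        UPDATE_LENGTH_MARK_OLD_CORRUPT  // new replica wins; old replicas are bad
    }

    static Action onBlockReceived(long replicaLen, long blockLen, long defaultBlockLen) {
        // Longer than a full block, or shorter than the NN-recorded length:
        // the new replica is corrupt.
        if (replicaLen > defaultBlockLen || replicaLen < blockLen) {
            return Action.MARK_NEW_REPLICA_CORRUPT;
        }
        // Longer than the NN-recorded length but still a legal length:
        // adopt the new length and mark the existing replicas corrupt.
        if (replicaLen > blockLen) {
            return Action.UPDATE_LENGTH_MARK_OLD_CORRUPT;
        }
        return Action.ACCEPT;
    }
}
```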

> FSNameSystem#addStoredBlock does not handle inconsistent block length correctly
> -------------------------------------------------------------------------------
>
>                 Key: HADOOP-5133
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5133
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.18.2
>            Reporter: Hairong Kuang
>            Priority: Blocker
>             Fix For: 0.19.1
>
>
> Currently the NameNode treats either the new replica or the existing replicas as corrupt
> if the new replica's length is inconsistent with the NN-recorded block length. The correct
> behavior should be:
> 1. For a block that is not under construction, the new replica should be marked as corrupt
> if its length is inconsistent (whether shorter or longer) with the NN-recorded block length;
> 2. For an under-construction block, if the new replica's length is shorter than the
> NN-recorded block length, the new replica could be marked as corrupt; if the new replica's
> length is longer, the NN should update its recorded block length, but it should not mark
> the existing replicas as corrupt. This is because the NN-recorded length for an
> under-construction block does not accurately match the block length on the datanode disk,
> so the NN should not judge an under-construction replica to be corrupt based on that
> inaccurate information.
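The two cases described in the issue can be sketched as follows. Again, this is only an illustrative sketch; the names and signature are hypothetical and not taken from the actual codebase or patch:

```java
// Hypothetical sketch of the corrected addStoredBlock length handling
// described in the issue; names and signature are illustrative only.
public class AddStoredBlockCheck {

    enum Decision {
        ACCEPT,
        MARK_NEW_REPLICA_CORRUPT,
        UPDATE_RECORDED_LENGTH  // adopt the longer length; keep existing replicas
    }

    static Decision onReplicaReported(boolean underConstruction,
                                      long replicaLen, long recordedLen) {
        if (!underConstruction) {
            // Case 1: completed block -- any mismatch means the new replica is corrupt.
            return replicaLen == recordedLen ? Decision.ACCEPT
                                             : Decision.MARK_NEW_REPLICA_CORRUPT;
        }
        // Case 2: under-construction block -- the recorded length is only a lower bound.
        if (replicaLen < recordedLen) {
            return Decision.MARK_NEW_REPLICA_CORRUPT;
        }
        if (replicaLen > recordedLen) {
            // Update the NN's recorded length, but do NOT mark existing replicas corrupt.
            return Decision.UPDATE_RECORDED_LENGTH;
        }
        return Decision.ACCEPT;
    }
}
```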

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

