hadoop-hdfs-issues mailing list archives

From "Uma Maheswara Rao G (Commented) (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-140) When a file is deleted, its blocks remain in the blocksmap till the next block report from Datanode
Date Fri, 23 Dec 2011 08:56:31 GMT

    [ https://issues.apache.org/jira/browse/HDFS-140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13175338#comment-13175338 ]

Uma Maheswara Rao G commented on HDFS-140:
------------------------------------------

Hi Dhruba,
  I am not insisting on pushing this, but while going through the HA issue HDFS-1972,
I came across one of your comments where you mentioned a link:
{quote}
 chooseExcessReplicates() does not really need the FSNamesystem lock. We have done this just
to increase scalability: http://bit.ly/rUDVui
{quote}
 While going through that file (FSNamesystem), I found that removePathAndBlocks removes the
block from the blocksMap:

{code}
blocksMap.removeBlock(b);
{code}

This patch does the same thing here. Since you have tested with very large clusters, could
you review the change and comment?
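For context, here is a minimal sketch of the idea being discussed: on file delete, drop the file's blocks from the block map eagerly instead of waiting for the next block report. This is a toy stand-in using a plain map keyed by block id; none of the names below come from the actual FSNamesystem code.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy stand-in for the namenode's blocksMap (hypothetical, for illustration only).
class BlocksMapSketch {
    private final Map<Long, String> blocksMap = new HashMap<>();

    void addBlock(long blockId, String ownerFile) {
        blocksMap.put(blockId, ownerFile);
    }

    // Analogous in spirit to removePathAndBlocks(): when a file is deleted,
    // remove each of its blocks from the map right away, so the map does not
    // carry stale entries until the datanode's next block report.
    void removePathAndBlocks(List<Long> fileBlocks) {
        for (long b : fileBlocks) {
            blocksMap.remove(b); // eager removal, no block report needed
        }
    }

    int size() {
        return blocksMap.size();
    }

    public static void main(String[] args) {
        BlocksMapSketch map = new BlocksMapSketch();
        map.addBlock(1L, "fileA");
        map.addBlock(2L, "fileA");
        map.removePathAndBlocks(Arrays.asList(1L, 2L));
        System.out.println("blocks remaining: " + map.size());
    }
}
```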

Thanks,
Uma

> When a file is deleted, its blocks remain in the blocksmap till the next block report
from Datanode
> ---------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-140
>                 URL: https://issues.apache.org/jira/browse/HDFS-140
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 0.20.1
>            Reporter: dhruba borthakur
>            Assignee: Uma Maheswara Rao G
>         Attachments: HDFS-140.20security205.patch
>
>
> When a file is deleted, the namenode sends out block deletions messages to the appropriate
datanodes. However, the namenode does not delete these blocks from the blocksmap. Instead,
the processing of the next block report from the datanode causes these blocks to get removed
from the blocksmap.
> If we desire to make block report processing less frequent, this issue needs to be addressed.
Also, this introduces nondeterministic behavior in a few unit tests. Another factor to consider
is to ensure that duplicate block detection is not compromised.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
