hadoop-hdfs-issues mailing list archives

From "Uma Maheswara Rao G (Commented) (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-3119) Overreplicated block is not deleted even after the replication factor is reduced after sync followed by closing that file
Date Thu, 29 Mar 2012 02:07:26 GMT

    [ https://issues.apache.org/jira/browse/HDFS-3119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240917#comment-13240917 ]

Uma Maheswara Rao G commented on HDFS-3119:
-------------------------------------------

{quote}
addStoredBlock(..) does call processOverReplicatedBlock(..) but the values of numCurrentReplica
or fileReplication may be incorrect.  We should print them out for debugging.
{quote}

Here addStoredBlock(..) did not perform processOverReplicatedBlock(..) because all DNs reported
the block before the fileInodeUnderConstruction was finalized. addStoredBlock simply returns
if the block is still in the underConstruction stage. After that point, there is no other path
that triggers processOverReplicatedBlock again. The only option I see is to run checkReplication
after the block is finalized; currently that check handles only neededReplications, not
over-replication.

This is reproducible, but the issue appears randomly because of another scenario: if the block
meets the minimum replication, the fileInodeUnderConstruction can be finalized first, and the
remaining addStoredBlock calls can then perform processOverReplicatedBlock. So the excess
block can be invalidated.
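The early-return gap described above can be modeled with a simplified sketch. This is not the actual BlockManager code; the class, fields, and method bodies here are stand-ins that only mirror the control flow this comment describes (block reports arriving while the file is under construction, and finalization checking only under-replication):

```java
import java.util.ArrayList;
import java.util.List;

// Simplified model of the scenario in this comment (NOT real HDFS code):
// all DN block reports arrive while the file is still under construction,
// so the over-replication check is skipped, and finalization today only
// considers under-replication (neededReplications).
public class OverReplicationModel {
    int expectedReplication = 1;        // after "setrep 1"
    boolean underConstruction = true;   // file synced but not yet closed
    List<String> replicas = new ArrayList<>();
    List<String> invalidated = new ArrayList<>();

    // Models addStoredBlock(..): returns early while the block is in the
    // underConstruction stage, so processOverReplicatedBlock(..) is never
    // reached for these reports.
    void addStoredBlock(String dn) {
        replicas.add(dn);
        if (underConstruction) {
            return; // the early return this comment describes
        }
        processOverReplicatedBlock();
    }

    // Models finalizing the block on close. Today only under-replication
    // is checked; the suggestion above is to also run the
    // over-replication check at this point.
    void finalizeBlock(boolean withProposedCheck) {
        underConstruction = false;
        // (under-replicated blocks would be queued in neededReplications)
        if (withProposedCheck) {
            processOverReplicatedBlock();
        }
    }

    // Models processOverReplicatedBlock(..): invalidate excess replicas.
    void processOverReplicatedBlock() {
        while (replicas.size() > expectedReplication) {
            invalidated.add(replicas.remove(replicas.size() - 1));
        }
    }
}
```

With both DNs reporting before the close, finalizing without the extra check leaves two replicas forever (the reported bug); with the extra check at finalization, the excess replica is invalidated.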
                
> Overreplicated block is not deleted even after the replication factor is reduced after
sync followed by closing that file
> ------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-3119
>                 URL: https://issues.apache.org/jira/browse/HDFS-3119
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: name-node
>    Affects Versions: 0.24.0
>            Reporter: J.Andreina
>            Assignee: Brandon Li
>            Priority: Minor
>             Fix For: 0.24.0, 0.23.2
>
>
> cluster setup:
> --------------
> 1 NN, 2 DN, replication factor 2, block report interval 3 sec, block size 256 MB
> step 1: write a file "filewrite.txt" of size 90 bytes with sync (not closed)
> step 2: change the replication factor to 1 using the command: "./hdfs dfs -setrep 1 /filewrite.txt"
> step 3: close the file
> * At the NN side, the log "Decreasing replication from 2 to 1 for /filewrite.txt" has
occurred, but the overreplicated blocks are not deleted even after the block report is sent
from the DN
> * While listing the file in the console using "./hdfs dfs -ls", the replication factor
for that file is shown as 1
> * The fsck report for that file displays that the file is replicated to 2 datanodes
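The three reproduction steps above have to be driven from a client program, since the sync in step 1 (keeping the file open after the data is persisted) is not exposed through the shell. A hedged sketch using the HDFS client API, assuming a running cluster reachable via the default configuration, might look like:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch of the reproduction steps from this report; cluster-dependent,
// not a definitive standalone repro.
public class Hdfs3119Repro {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path p = new Path("/filewrite.txt");

        // step 1: write 90 bytes and sync (hflush) without closing
        FSDataOutputStream out = fs.create(p, (short) 2);
        out.write(new byte[90]);
        out.hflush();   // data persisted, file still under construction

        // step 2: reduce the replication factor while the file is open
        // (same effect as "./hdfs dfs -setrep 1 /filewrite.txt")
        fs.setReplication(p, (short) 1);

        // step 3: close the file; per this report, the excess replica
        // is not invalidated afterwards
        out.close();
    }
}
```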

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
