hadoop-hdfs-dev mailing list archives

From "Hairong Kuang (JIRA)" <j...@apache.org>
Subject [jira] Created: (HDFS-694) Add a test to make sure that node decommission does not get blocked by underreplicated blocks in an unclosed file
Date Mon, 12 Oct 2009 17:14:31 GMT
Add a test to make sure that node decommission does not get blocked by underreplicated blocks
in an unclosed file
----------------------------------------------------------------------------------------------------------------

                 Key: HDFS-694
                 URL: https://issues.apache.org/jira/browse/HDFS-694
             Project: Hadoop HDFS
          Issue Type: Test
    Affects Versions: 0.21.0
            Reporter: Hairong Kuang
             Fix For: 0.21.0


We have a cluster that took much longer than usual to decommission datanodes. It turned
out to be caused by an open file. In HDFS 0.20, while a file is open, the NN does not schedule
replication for any block belonging to that file. So if an open file has an under-replicated
block with a replica on a decommissioning datanode, the decommission won't complete until
the file is closed.

The new append implementation changed the replication strategy: any complete block in
an unclosed file is scheduled for replication if it becomes under-replicated. This jira
aims to add a test to make sure that node decommission does not get blocked by under-replicated
blocks in an open file.
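
A minimal sketch of what such a test might look like (assuming JUnit and the MiniDFSCluster
test harness; the two private helpers are hypothetical placeholders for the dfs.hosts.exclude
/ refreshNodes bookkeeping and the decommission wait loop, not real HDFS APIs):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.junit.Test;

public class TestDecommissionWithOpenFile {

  @Test
  public void testDecommissionDoesNotBlockOnOpenFile() throws Exception {
    Configuration conf = new Configuration();
    MiniDFSCluster cluster = new MiniDFSCluster(conf, 4, true, null);
    try {
      cluster.waitActive();
      FileSystem fs = cluster.getFileSystem();

      // Write one full block with replication 3, then hflush() so the block
      // is complete on the datanodes, but leave the file open.
      Path openFile = new Path("/openFile");
      FSDataOutputStream out =
          fs.create(openFile, true, 4096, (short) 3, 1024 * 1024);
      out.write(new byte[1024 * 1024]);
      out.hflush();

      // Decommission one datanode that holds a replica of that block.
      DatanodeInfo dn = startDecommissionOfReplicaHolder(cluster, openFile);

      // With the new append implementation, the complete block of the still
      // open file should be re-replicated, so decommission finishes without
      // the file ever being closed.
      waitForDecommissionToFinish(cluster, dn);

      out.close();
    } finally {
      cluster.shutdown();
    }
  }

  // Hypothetical helper: in a real test this would add the chosen node to the
  // dfs.hosts.exclude file and trigger refreshNodes on the namenode.
  private DatanodeInfo startDecommissionOfReplicaHolder(
      MiniDFSCluster cluster, Path file) throws Exception {
    throw new UnsupportedOperationException("sketch only");
  }

  // Hypothetical helper: poll the namenode until the node reports
  // DECOMMISSIONED, or fail the test after a timeout.
  private void waitForDecommissionToFinish(
      MiniDFSCluster cluster, DatanodeInfo dn) throws Exception {
    throw new UnsupportedOperationException("sketch only");
  }
}
{code}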

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

