hadoop-hdfs-dev mailing list archives

From "Manoj Govindassamy (JIRA)" <j...@apache.org>
Subject [jira] [Created] (HDFS-11790) Decommissioning of a DataNode after MaintenanceState takes a very long time to complete
Date Wed, 10 May 2017 01:13:04 GMT
Manoj Govindassamy created HDFS-11790:
-----------------------------------------

             Summary: Decommissioning of a DataNode after MaintenanceState takes a very long
time to complete
                 Key: HDFS-11790
                 URL: https://issues.apache.org/jira/browse/HDFS-11790
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: hdfs
    Affects Versions: 3.0.0-alpha1
            Reporter: Manoj Govindassamy
            Assignee: Manoj Govindassamy


Problem:
When a DataNode is requested for decommissioning after it has successfully transitioned to
MaintenanceState (HDFS-7877), the decommissioning state transition is stuck for a long time,
even with a very small number of blocks in the cluster.
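For context, maintenance and decommission requests of this kind are typically expressed through the combined host file (when dfs.namenode.hosts.provider.classname is set to CombinedHostFileManager, per HDFS-9005). A sketch of an entry that places a node into maintenance (hostname and expiry time here are hypothetical) might look like:

```json
[
  {
    "hostName": "dn1.example.com",
    "adminState": "IN_MAINTENANCE",
    "maintenanceExpireTimeInMS": 1526760000000
  }
]
```

Changing "adminState" to "DECOMMISSIONED" for the same entry and running `hdfs dfsadmin -refreshNodes` then requests decommissioning of that node, which is the sequence that triggers the behavior below.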

Details:
* A DataNode DN1 was requested for MaintenanceState and successfully transitioned from
ENTERING_MAINTENANCE to IN_MAINTENANCE state, as there were sufficient replicas for all of its blocks.
* Since DN1 was now in maintenance state, the DataNode process was stopped on DN1. Later, the
same DN1 was requested for decommissioning.
* As part of decommissioning, all blocks residing on DN1 were scheduled for re-replication
to other DataNodes, so that DN1 could transition from ENTERING_DECOMMISSION to DECOMMISSIONED.
* However, re-replication for a few blocks was stuck for a long time before eventually completing.
* Digging into the code and logs showed that the IN_MAINTENANCE DN1 was chosen as a source datanode
for re-replication of a few of the blocks. Since the DataNode process on DN1 was already stopped,
that re-replication was stuck for a long time.
* Eventually PendingReplicationMonitor timed out, and re-replication was re-scheduled
for the timed-out blocks. During the re-scheduled re-replication, the IN_MAINTENANCE DN1 was again
chosen as a source datanode for a few of the blocks, leading to another timeout. This cycle repeated
a few times until all blocks were re-replicated.
* By design, IN_MAINTENANCE datanodes should not be chosen for any read or write operations.
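The fix implied by the last point is that replication-source selection must skip IN_MAINTENANCE replicas. A minimal standalone sketch of that filter (this is not the actual BlockManager code; the class and method names here are illustrative only):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of source-datanode selection for re-replication.
// An IN_MAINTENANCE replica still counts toward replication, but its
// DataNode process may be stopped, so it must never be picked as a
// source for block transfers.
public class SourceNodeSelection {

    enum AdminState { NORMAL, ENTERING_MAINTENANCE, IN_MAINTENANCE, DECOMMISSIONED }

    static class DatanodeInfo {
        final String host;
        final AdminState state;
        DatanodeInfo(String host, AdminState state) {
            this.host = host;
            this.state = state;
        }
    }

    // Return the replicas eligible to serve as a re-replication source,
    // excluding any node that is IN_MAINTENANCE.
    static List<DatanodeInfo> chooseSourceCandidates(List<DatanodeInfo> replicas) {
        List<DatanodeInfo> candidates = new ArrayList<>();
        for (DatanodeInfo dn : replicas) {
            if (dn.state != AdminState.IN_MAINTENANCE) {
                candidates.add(dn);
            }
        }
        return candidates;
    }

    public static void main(String[] args) {
        List<DatanodeInfo> replicas = new ArrayList<>();
        replicas.add(new DatanodeInfo("dn1", AdminState.IN_MAINTENANCE));
        replicas.add(new DatanodeInfo("dn2", AdminState.NORMAL));
        for (DatanodeInfo dn : chooseSourceCandidates(replicas)) {
            System.out.println(dn.host); // prints: dn2
        }
    }
}
```

Without such a filter, the scheduler can repeatedly pick the stopped DN1, and each pick only resolves via a PendingReplicationMonitor timeout, which matches the slow-decommission behavior described above.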



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-help@hadoop.apache.org

