hadoop-hdfs-issues mailing list archives

From "Manoj Govindassamy (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-11790) Decommissioning of a DataNode after MaintenanceState takes a very long time to complete
Date Wed, 10 May 2017 21:45:04 GMT

     [ https://issues.apache.org/jira/browse/HDFS-11790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Manoj Govindassamy updated HDFS-11790:
--------------------------------------
    Attachment: HDFS-11790-test.01.patch

Attached a test patch that demonstrates the issue in BlockManager#computeReconstructionWorkForBlocks
when decommissioning a datanode that has already entered maintenance state. In the log below, the
decommissioning node 127.0.0.1:59416 (the node that had entered maintenance) is itself asked to
replicate the blocks, so the DecommissionManager keeps reporting pending blocks.

{noformat}
2017-05-10 14:39:45,315 [RedundancyMonitor] DEBUG BlockStateChange (BlockManager.java:computeReconstructionWorkForBlocks(1782))
- BLOCK* ask [127.0.0.1:59416] to replicate blk_1073741828_1004 to datanode(s) 127.0.0.1:59403
2017-05-10 14:39:45,315 [RedundancyMonitor] DEBUG BlockStateChange (BlockManager.java:computeReconstructionWorkForBlocks(1782))
- BLOCK* ask [127.0.0.1:59416] to replicate blk_1073741842_1018 to datanode(s) 127.0.0.1:59412
2017-05-10 14:39:45,315 [RedundancyMonitor] DEBUG BlockStateChange (BlockManager.java:computeReconstructionWorkForBlocks(1782))
- BLOCK* ask [127.0.0.1:59416] to replicate blk_1073741844_1020 to datanode(s) 127.0.0.1:59407
2017-05-10 14:39:46,319 [RedundancyMonitor] DEBUG BlockStateChange (BlockManager.java:computeReconstructionWorkForBlocks(1782))
- BLOCK* ask [127.0.0.1:59416] to replicate blk_1073741825_1001 to datanode(s) 127.0.0.1:59407
2017-05-10 14:39:46,319 [RedundancyMonitor] DEBUG BlockStateChange (BlockManager.java:computeReconstructionWorkForBlocks(1782))
- BLOCK* ask [127.0.0.1:59416] to replicate blk_1073741829_1005 to datanode(s) 127.0.0.1:59412
2017-05-10 14:39:46,319 [RedundancyMonitor] DEBUG BlockStateChange (BlockManager.java:computeReconstructionWorkForBlocks(1782))
- BLOCK* ask [127.0.0.1:59416] to replicate blk_1073741833_1009 to datanode(s) 127.0.0.1:59407
2017-05-10 14:39:47,321 [RedundancyMonitor] DEBUG BlockStateChange (BlockManager.java:computeReconstructionWorkForBlocks(1782))
- BLOCK* ask [127.0.0.1:59416] to replicate blk_1073741845_1021 to datanode(s) 127.0.0.1:59407
2017-05-10 14:39:48,328 [RedundancyMonitor] DEBUG BlockStateChange (BlockManager.java:computeReconstructionWorkForBlocks(1782))
- BLOCK* ask [127.0.0.1:59416] to replicate blk_1073741843_1019 to datanode(s) 127.0.0.1:59403
2017-05-10 14:39:54,357 [RedundancyMonitor] DEBUG BlockStateChange (BlockManager.java:computeReconstructionWorkForBlocks(1782))
- BLOCK* ask [127.0.0.1:59416] to replicate blk_1073741834_1010 to datanode(s) 127.0.0.1:59412

2017-05-10 14:40:24,279 [DecommissionMonitor-0] DEBUG blockmanagement.DecommissionManager
(DecommissionManager.java:check(577)) - Node 127.0.0.1:59416 still has 9 blocks to replicate
before it is a candidate to finish Decommission In Progress.
{noformat}
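
For reference, a rough outline of the scenario the test drives is below. This is illustrative
only, not the attached patch; putInMaintenance() and startDecommission() are placeholders for the
real admin-state plumbing (hosts/exclude files plus refreshNodes) used by the actual test.

{noformat}
// Rough outline only, not the attached patch. The two admin-state helpers are placeholders.
Configuration conf = new HdfsConfiguration();
MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(4).build();
cluster.waitActive();
DistributedFileSystem fs = cluster.getFileSystem();

// Spread a number of small, 2x-replicated blocks across the DataNodes.
for (int i = 0; i < 20; i++) {
  DFSTestUtil.createFile(fs, new Path("/file-" + i), 1024L, (short) 2, 0L);
}

DataNode dn1 = cluster.getDataNodes().get(0);
putInMaintenance(dn1);                       // placeholder: DN1 -> IN_MAINTENANCE
cluster.stopDataNode(dn1.getDisplayName());  // stop the DataNode process while in maintenance
startDecommission(dn1);                      // placeholder: decommission the same node

// With the bug, the decommission wait is dominated by PendingReplicationMonitor timeouts,
// because the stopped DN1 keeps being chosen as the replication source.
{noformat}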


> Decommissioning of a DataNode after MaintenanceState takes a very long time to complete
> ---------------------------------------------------------------------------------------
>
>                 Key: HDFS-11790
>                 URL: https://issues.apache.org/jira/browse/HDFS-11790
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs
>    Affects Versions: 3.0.0-alpha1
>            Reporter: Manoj Govindassamy
>            Assignee: Manoj Govindassamy
>         Attachments: HDFS-11790-test.01.patch
>
>
> *Problem:*
> When a DataNode is requested for Decommissioning after it has successfully transitioned to
MaintenanceState (HDFS-7877), the decommissioning state transition gets stuck for a long time,
even with a very small number of blocks in the cluster.
> *Details:*
> * A DataNode DN1 was requested to enter MaintenanceState, and it successfully transitioned from
the ENTERING_MAINTENANCE state to the IN_MAINTENANCE state, as there were sufficient replicas for
all of its blocks.
> * Since DN1 was now in maintenance state, the DataNode process was stopped on DN1. Later, the
same DN1 was requested for Decommissioning.
> * As part of Decommissioning, all the blocks residing on DN1 were requested to be re-replicated
to other DataNodes, so that DN1 could transition from DECOMMISSION_INPROGRESS to DECOMMISSIONED.

> * But re-replication of a few blocks was stuck for a long time; eventually it completed.
> * Digging through the code and logs, I found that the IN_MAINTENANCE DN1 was chosen as the
source datanode for re-replication of a few of the blocks. Since the DataNode process on DN1 was
already stopped, the re-replication was stuck for a long time.
> * Eventually PendingReplicationMonitor timed out, and re-replication was re-scheduled for the
timed-out blocks. During this re-replication as well, the IN_MAINTENANCE DN1 was chosen as the
source datanode for a few of the blocks, leading to another timeout. This cycle repeated a few
times until all blocks were re-replicated.
> * By design, IN_MAINTENANCE datanodes should not be chosen for any read or write operations;
a sketch of the kind of source-selection guard this implies follows below.
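> A minimal sketch (illustrative only, not the actual patch) of such a guard in the
> source-datanode selection for reconstruction work; it assumes the
> DatanodeDescriptor#isInMaintenance() accessor:
> {noformat}
> // Illustrative sketch, not the HDFS-11790 fix: skip IN_MAINTENANCE replicas when picking
> // the source for reconstruction work, since the DataNode process behind them may already
> // be stopped and the copy would only time out in PendingReplicationMonitor.
> for (DatanodeStorageInfo storage : blocksMap.getStorages(block)) {
>   DatanodeDescriptor node = storage.getDatanodeDescriptor();
>   if (node.isInMaintenance()) {
>     continue;  // never schedule an IN_MAINTENANCE node as the copy source
>   }
>   // ... existing priority-based source selection continues here ...
> }
> {noformat}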
 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


