hadoop-hdfs-issues mailing list archives

From "Lantao Jin (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-11285) Dead DataNodes keep a long time in (Dead, DECOMMISSION_INPROGRESS), and never transition to (Dead, DECOMMISSIONED)
Date Tue, 10 Jan 2017 08:48:58 GMT

    [ https://issues.apache.org/jira/browse/HDFS-11285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15814361#comment-15814361
] 

Lantao Jin commented on HDFS-11285:
-----------------------------------

Yes, you are right, [~andrew.wang]. The block is open for write.
{code}
2017-01-10 01:43:09,906 DEBUG org.apache.hadoop.hdfs.server.blockmanagement.DecommissionManager:
Processing decommission-in-progress node 10.103.58.19:50010
2017-01-10 01:43:09,906 TRACE org.apache.hadoop.hdfs.server.blockmanagement.DecommissionManager:
Block blk_4280405944_1106180180519{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1,
replicas=[ReplicaUC[[DISK]DS-da139e63-aa77-4533-96a7-ba686d6b067d:NORMAL:10.103.58.30:50010|RBW],
ReplicaUC[[DISK]DS-0b38c81f-1e3a-4e9b-acfb-97457c8ed6de:NORMAL:10.103.58.19:50010|RBW], ReplicaUC[[DISK]DS-be50263a-69d5-4efe-a70a-56585022a403:NORMAL:10.142.126.52:50010|RBW]]}
numExpected=3, numLive=0
2017-01-10 01:43:09,906 TRACE org.apache.hadoop.hdfs.server.blockmanagement.DecommissionManager:
UC block blk_4280405944_1106180180519{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1,
replicas=[ReplicaUC[[DISK]DS-da139e63-aa77-4533-96a7-ba686d6b067d:NORMAL:10.103.58.30:50010|RBW],
ReplicaUC[[DISK]DS-0b38c81f-1e3a-4e9b-acfb-97457c8ed6de:NORMAL:10.103.58.19:50010|RBW], ReplicaUC[[DISK]DS-be50263a-69d5-4efe-a70a-56585022a403:NORMAL:10.142.126.52:50010|RBW]]}
insufficiently-replicated since numLive (0) < minR (1)
2017-01-10 01:43:09,906 INFO org.apache.hadoop.hdfs.server.blockmanagement.DecommissionManager:
Block: blk_4280405944_1106180180519{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1,
replicas=[ReplicaUC[[DISK]DS-da139e63-aa77-4533-96a7-ba686d6b067d:NORMAL:10.103.58.30:50010|RBW],
ReplicaUC[[DISK]DS-0b38c81f-1e3a-4e9b-acfb-97457c8ed6de:NORMAL:10.103.58.19:50010|RBW], ReplicaUC[[DISK]DS-be50263a-69d5-4efe-a70a-56585022a403:NORMAL:10.142.126.52:50010|RBW]]},
Expected Replicas: 3, live replicas: 0, corrupt replicas: 0, decommissioned replicas: 0, decommissioning
replicas: 1, excess replicas: 0, Is Open File: true, Datanodes having this block: 10.103.58.19:50010
, Current Datanode: 10.103.58.19:50010, Is current datanode decommissioning: true
2017-01-10 01:43:09,906 DEBUG org.apache.hadoop.hdfs.server.blockmanagement.DecommissionManager:
Node 10.103.58.19:50010 still has 1 blocks to replicate before it is a candidate to finish
decommissioning.
{code}
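The TRACE line "insufficiently-replicated since numLive (0) < minR (1)" reflects a check of roughly the following shape. This is a simplified illustration only, not the actual DecommissionManager code; the class and method names below ({{ReplicationCheck}}, {{isSufficient}}) are made up for the sketch, and minR corresponds to {{dfs.namenode.replication.min}}, which defaults to 1.

```java
// Illustrative sketch of the replication check behind the TRACE line above.
// The real logic lives in org.apache.hadoop.hdfs.server.blockmanagement
// and is more involved; names here are hypothetical.
public class ReplicationCheck {
    // dfs.namenode.replication.min defaults to 1
    static final int MIN_REPLICATION = 1;

    // A block keeps its node in DECOMMISSION_INPROGRESS while numLive < minR.
    // RBW (replica-being-written) replicas on dead or decommissioning nodes
    // do not count as live, which is how an open-for-write block can pin a
    // dead node indefinitely.
    static boolean isSufficient(int numLive, int expected) {
        if (numLive < MIN_REPLICATION) {
            return false; // e.g. numLive=0 in the log above: insufficient
        }
        return numLive >= expected;
    }

    public static void main(String[] args) {
        // Values from the log: numExpected=3, numLive=0
        System.out.println(isSufficient(0, 3)); // blocks decommissioning
        System.out.println(isSufficient(3, 3)); // would allow it to finish
    }
}
```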

But why does the block stay in UNDER_CONSTRUCTION for so long?

> Dead DataNodes keep a long time in (Dead, DECOMMISSION_INPROGRESS), and never transition
to (Dead, DECOMMISSIONED)
> ------------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-11285
>                 URL: https://issues.apache.org/jira/browse/HDFS-11285
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 2.7.1
>            Reporter: Lantao Jin
>         Attachments: DecomStatus.png
>
>
> We have seen the use case of decommissioning DataNodes that are already dead or unresponsive,
and not expected to rejoin the cluster. In a large cluster, we found more than 100 nodes that
were dead and decommissioning, while their "Under replicated blocks" and "Blocks with
no live replicas" counters were all ZERO. This was actually fixed in [HDFS-7374|https://issues.apache.org/jira/browse/HDFS-7374]:
after that patch, running refreshNodes twice eliminates this case. But it seems the fix was lost
in the refactor [HDFS-7411|https://issues.apache.org/jira/browse/HDFS-7411]. We are using a
Hadoop version based on 2.7.1, and only the following operations can transition the status from
(Dead, DECOMMISSION_INPROGRESS) to (Dead, DECOMMISSIONED):
> # Retire it from hdfs-exclude
> # refreshNodes
> # Re-add it to hdfs-exclude
> # refreshNodes
> So, why was this code removed in the refactored DecommissionManager?
> {code:java}
> if (!node.isAlive) {
>   LOG.info("Dead node " + node + " is decommissioned immediately.");
>   node.setDecommissioned();
> }
> {code}
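The four-step retire/refresh/re-add/refresh workaround listed in the issue description can be scripted roughly as below. This is a hedged sketch: the exclude-file path is an assumption (use whatever file your {{dfs.hosts.exclude}} setting points at), and {{hdfs dfsadmin -refreshNodes}} must be run as the HDFS superuser.

```shell
#!/bin/sh
# Sketch of the remove/refresh/re-add/refresh cycle described above.
# EXCLUDE_FILE is an assumed path; point it at the file configured
# via dfs.hosts.exclude on your cluster.
EXCLUDE_FILE="${EXCLUDE_FILE:-/tmp/hdfs-exclude}"
DEAD_NODE="10.103.58.19"

refresh_nodes() {
    # `hdfs dfsadmin -refreshNodes` makes the NameNode re-read the
    # include/exclude host lists; skipped when hdfs is not installed.
    if command -v hdfs >/dev/null 2>&1; then
        hdfs dfsadmin -refreshNodes
    else
        echo "hdfs not on PATH; skipping refreshNodes" >&2
    fi
}

touch "$EXCLUDE_FILE"

# 1. Retire the dead node from the exclude file
grep -v -x -F "$DEAD_NODE" "$EXCLUDE_FILE" > "${EXCLUDE_FILE}.tmp" || true
mv "${EXCLUDE_FILE}.tmp" "$EXCLUDE_FILE"

# 2. refreshNodes
refresh_nodes

# 3. Re-add the node to the exclude file
echo "$DEAD_NODE" >> "$EXCLUDE_FILE"

# 4. refreshNodes again; the dead node should now reach DECOMMISSIONED
refresh_nodes
```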



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org

