hadoop-hdfs-issues mailing list archives

From "Lukas Majercak (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (HDFS-11499) Decommissioning stuck because of failing recovery
Date Wed, 08 Mar 2017 01:38:38 GMT

    [ https://issues.apache.org/jira/browse/HDFS-11499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15900519#comment-15900519 ]

Lukas Majercak edited comment on HDFS-11499 at 3/8/17 1:37 AM:
---------------------------------------------------------------

Looks like the issue is in the configuration. I have been running this test on 2.7.1 with
no problems, and just found that trunk is missing a configuration setting, specifically:
{code:java}
conf.setInt(DFSConfigKeys.DFS_NAMENODE_REPLICATION_PENDING_TIMEOUT_SEC_KEY, 4);
{code}

The test times out because a block is stuck in PendingReconstructionBlocks.
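
Not from the original comment, but as a rough illustration of where that setting is applied: a minimal sketch of a test setup that lowers the pending-reconstruction timeout. The config key is the one quoted above; the MiniDFSCluster wiring and class names around it are assumptions, not the actual test code.
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class PendingTimeoutTestSetup {
  // Hypothetical helper: builds a mini cluster whose NameNode re-schedules
  // timed-out pending reconstructions after 4 seconds instead of the much
  // longer default, so the test is not left waiting on PendingReconstructionBlocks.
  static MiniDFSCluster startCluster() throws Exception {
    Configuration conf = new HdfsConfiguration();
    conf.setInt(DFSConfigKeys.DFS_NAMENODE_REPLICATION_PENDING_TIMEOUT_SEC_KEY, 4);
    return new MiniDFSCluster.Builder(conf).numDataNodes(3).build();
  }
}
{code}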


was (Author: lukmajercak):
Looks like the issue is in the configuration, I have been running this test on 2.7.1 with
no problems, and just found that trunk is missing some configurations, specifically :
conf.setInt(DFSConfigKeys.DFS_NAMENODE_REPLICATION_PENDING_TIMEOUT_SEC_KEY, 4). 

The test times out because of a block being in PendingReconstructionBlocks. 

> Decommissioning stuck because of failing recovery
> -------------------------------------------------
>
>                 Key: HDFS-11499
>                 URL: https://issues.apache.org/jira/browse/HDFS-11499
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs, namenode
>    Affects Versions: 2.7.1, 2.7.2, 2.7.3, 3.0.0-alpha2
>            Reporter: Lukas Majercak
>            Assignee: Lukas Majercak
>              Labels: blockmanagement, decommission, recovery
>             Fix For: 3.0.0-alpha3
>
>         Attachments: HDFS-11499.02.patch, HDFS-11499.03.patch, HDFS-11499.04.patch, HDFS-11499.patch
>
>
> Block recovery will fail to finalize the file if the locations of the last, incomplete
> block are being decommissioned. Vice versa, the decommissioning will be stuck, waiting for
> the last block to be completed.
> {code}
> org.apache.hadoop.ipc.RemoteException(java.lang.IllegalStateException): Failed to finalize
> INodeFile testRecoveryFile since blocks[255] is non-complete, where blocks=[blk_1073741825_1001,
> blk_1073741826_1002...
> {code}
> The fix is to count replicas on decommissioning nodes when completing the last block in BlockManager.commitOrCompleteLastBlock,
> as we know that the DecommissionManager will not decommission a node that still has under-construction (UC) blocks.
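
As an editorial aside, here is a standalone sketch of the counting rule the fix describes. It is plain Java with made-up names, not the actual BlockManager.commitOrCompleteLastBlock code: a replica on a node that is still decommissioning is treated as usable when deciding whether the last block has enough replicas to be completed, since decommissioning cannot finish while the node holds an under-construction block.
{code:java}
import java.util.Arrays;
import java.util.List;

public class LastBlockCompletionSketch {
  // Simplified admin state of the datanode holding a replica (hypothetical enum).
  enum AdminState { IN_SERVICE, DECOMMISSION_IN_PROGRESS, DECOMMISSIONED }

  // Count replicas on in-service and decommission-in-progress nodes as usable.
  static boolean hasEnoughReplicas(List<AdminState> replicaStates, int minReplication) {
    long usable = replicaStates.stream()
        .filter(s -> s == AdminState.IN_SERVICE || s == AdminState.DECOMMISSION_IN_PROGRESS)
        .count();
    return usable >= minReplication;
  }

  public static void main(String[] args) {
    // All replicas of the last, incomplete block sit on decommissioning nodes.
    // Counting them lets the block be completed, which in turn lets the
    // decommission finish, breaking the deadlock described in this issue.
    List<AdminState> replicas = Arrays.asList(
        AdminState.DECOMMISSION_IN_PROGRESS, AdminState.DECOMMISSION_IN_PROGRESS);
    System.out.println(hasEnoughReplicas(replicas, 1)); // prints: true
  }
}
{code}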





