hadoop-hdfs-issues mailing list archives

From "Ming Ma (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-7208) NN doesn't schedule replication when a DN storage fails
Date Mon, 13 Oct 2014 23:08:33 GMT

     [ https://issues.apache.org/jira/browse/HDFS-7208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]

Ming Ma updated HDFS-7208:
--------------------------
    Attachment: HDFS-7208.patch

Here is the initial patch, based on the heartbeat notification approach; the assumption is that the DN
reports all healthy storages in each heartbeat. This approach is simpler than the blockReport
approach, which would require the DN to persist the info to cover some failure scenarios. It also
makes storage failure detection faster.

1. NN detects failed storages during HB processing based on the delta between the DN's reported
healthy storages and the storages NN has recorded, and marks those missing storages DatanodeStorage.State.FAILED.

2. HeartbeatManager will then remove blocks on those DatanodeStorage.State.FAILED storages. This
covers some corner scenarios where new replicas might otherwise be added to the BlocksMap afterwards.

3. It also covers the case where admins reduce the number of healthy volumes on a DN and restart
the DN.
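The delta-detection idea in step 1 and the follow-up removal in step 2 can be sketched as below. This is an illustrative toy, not the actual patch: the class, field, and method names (HeartbeatDeltaSketch, knownStorages, processHeartbeat) are hypothetical, and the real NN tracks storages per DatanodeDescriptor and removes replicas via the BlocksMap.

```java
import java.util.*;

// Hypothetical sketch of the heartbeat-delta approach described above:
// compare the storages the NN already knows for a DN against the storage
// IDs reported healthy in the latest heartbeat; anything missing is marked
// FAILED so its replicas can then be removed from the block map.
public class HeartbeatDeltaSketch {

    enum State { NORMAL, FAILED }

    // Storages the NN currently knows for one DN, keyed by storage ID.
    static Map<String, State> knownStorages = new HashMap<>();

    // Step 1: mark storages missing from the heartbeat as FAILED.
    // Returns the set of storage IDs newly marked FAILED, which step 2
    // would use to drop the affected replicas and schedule re-replication.
    static Set<String> processHeartbeat(Set<String> reportedHealthy) {
        Set<String> newlyFailed = new HashSet<>();
        for (Map.Entry<String, State> e : knownStorages.entrySet()) {
            if (e.getValue() == State.NORMAL
                    && !reportedHealthy.contains(e.getKey())) {
                e.setValue(State.FAILED);
                newlyFailed.add(e.getKey());
            }
        }
        return newlyFailed;
    }

    public static void main(String[] args) {
        knownStorages.put("DS-1", State.NORMAL);
        knownStorages.put("DS-2", State.NORMAL);
        // Heartbeat reports only DS-1 healthy, e.g. after a disk failure.
        Set<String> failed =
            processHeartbeat(new HashSet<>(Arrays.asList("DS-1")));
        System.out.println(failed);
    }
}
```

Because the check is purely a set difference against the heartbeat, it also naturally handles the restart-with-fewer-volumes case in step 3: the first heartbeat after restart simply omits the removed storages.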

> NN doesn't schedule replication when a DN storage fails
> -------------------------------------------------------
>
>                 Key: HDFS-7208
>                 URL: https://issues.apache.org/jira/browse/HDFS-7208
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Ming Ma
>         Attachments: HDFS-7208.patch
>
>
> We found the following problem: when a storage device on a DN fails, NN continues to
believe the replicas of the blocks on that storage are valid and doesn't schedule replication.
> A DN has 12 storage disks, so there is one blockReport per storage. When a disk
fails, the number of blockReports from that DN drops from 12 to 11. Given dfs.datanode.failed.volumes.tolerated
is configured to be > 0, NN still considers that DN healthy.
> 1. A disk failed. All blocks of that disk are removed from DN dataset.
>  
> {noformat}
> 2014-10-04 02:11:12,626 WARN org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl:
Removing replica BP-1748500278-xx.xx.xx.xxx-1377803467793:1121568886 on failed volume /data/disk6/dfs/current
> {noformat}
> 2. NN receives DatanodeProtocol.DISK_ERROR, but that isn't enough for NN to remove the
DN and the replicas from the BlocksMap. In addition, blockReport doesn't provide the diff,
since it is done per storage.
> {noformat}
> 2014-10-04 02:11:12,681 WARN org.apache.hadoop.hdfs.server.namenode.NameNode: Disk error
on DatanodeRegistration(xx.xx.xx.xxx, datanodeUuid=f3b8a30b-e715-40d6-8348-3c766f9ba9ab, infoPort=50075,
ipcPort=50020, storageInfo=lv=-55;cid=CID-e3c38355-fde5-4e3a-b7ce-edacebdfa7a1;nsid=420527250;c=1410283484939):
DataNode failed volumes:/data/disk6/dfs/current
> {noformat}
> 3. Run fsck on the file and confirm the NN's BlocksMap still has that replica.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
