hadoop-hdfs-issues mailing list archives

From 刘喆 (JIRA) <j...@apache.org>
Subject [jira] [Commented] (HDFS-7208) NN doesn't schedule replication when a DN storage fails
Date Wed, 25 Nov 2015 07:18:11 GMT

    [ https://issues.apache.org/jira/browse/HDFS-7208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15026339#comment-15026339 ]

刘喆 commented on HDFS-7208:
--------------------------

We met the same problem, but we have a very simple patch that works. We can treat it as if the
datanode had deleted the related blocks, so we only need one line to fix it.


diff --git a/hadoop/adh/src/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java b/hadoop/adh/src/hadoop-hdfs-proje
index 3320c65..7a10072 100644
--- a/hadoop/adh/src/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
+++ b/hadoop/adh/src/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
@@ -1332,6 +1332,8 @@ public void checkDataDir() throws DiskErrorException {
                   + " on failed volume " + fv.getCurrentDir().getAbsolutePath());
               ib.remove();
               removedBlocks++;
+              datanode.notifyNamenodeDeletedBlock(new ExtendedBlock(bpid, b.getBlockId()), b.getStorageUuid());
             }
           }
         }
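To make the intent of the one-line change concrete, here is a minimal, self-contained sketch of the idea. The classes below (Replica, Namenode, purgeFailedVolume) are hypothetical stand-ins, not the real Hadoop types: while the datanode purges replicas that lived on a failed volume, it should also tell the namenode each block is gone so replication can be scheduled.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class FailedVolumeSketch {
    // Stand-in for a block replica record in the DN dataset.
    static class Replica {
        final long blockId;
        final String volume;
        Replica(long blockId, String volume) { this.blockId = blockId; this.volume = volume; }
    }

    // Stand-in for DataNode#notifyNamenodeDeletedBlock: records which
    // blocks the namenode was told about.
    static class Namenode {
        final List<Long> deletedBlocks = new ArrayList<>();
        void notifyDeletedBlock(long blockId) { deletedBlocks.add(blockId); }
    }

    // Mirrors the checkDataDir loop in the diff: drop replicas on the
    // failed volume and, per the fix, notify the namenode for each one.
    static int purgeFailedVolume(List<Replica> dataset, String failedVolume, Namenode nn) {
        int removed = 0;
        for (Iterator<Replica> it = dataset.iterator(); it.hasNext();) {
            Replica r = it.next();
            if (r.volume.equals(failedVolume)) {
                it.remove();
                removed++;
                nn.notifyDeletedBlock(r.blockId);  // <- the added line
            }
        }
        return removed;
    }

    public static void main(String[] args) {
        List<Replica> dataset = new ArrayList<>();
        dataset.add(new Replica(1L, "/data/disk6"));
        dataset.add(new Replica(2L, "/data/disk7"));
        dataset.add(new Replica(3L, "/data/disk6"));
        Namenode nn = new Namenode();
        int removed = purgeFailedVolume(dataset, "/data/disk6", nn);
        System.out.println(removed + " " + nn.deletedBlocks.size());
    }
}
```

With the notification in place, every replica removed from the failed volume is also reported deleted, so the namenode's block map no longer counts it as a valid replica.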

> NN doesn't schedule replication when a DN storage fails
> -------------------------------------------------------
>
>                 Key: HDFS-7208
>                 URL: https://issues.apache.org/jira/browse/HDFS-7208
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>            Reporter: Ming Ma
>            Assignee: Ming Ma
>             Fix For: 2.6.0
>
>         Attachments: HDFS-7208-2.patch, HDFS-7208-3.patch, HDFS-7208.patch
>
>
> We found the following problem. When a storage device on a DN fails, NN continues to believe replicas of those blocks on that storage are valid and doesn't schedule replication.
> A DN has 12 storage disks, so there is one blockReport for each storage. When a disk fails, the number of blockReports from that DN is reduced from 12 to 11. Given that dfs.datanode.failed.volumes.tolerated is configured to be > 0, NN still considers that DN healthy.
> 1. A disk failed. All blocks of that disk are removed from DN dataset.
>  
> {noformat}
> 2014-10-04 02:11:12,626 WARN org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Removing replica BP-1748500278-xx.xx.xx.xxx-1377803467793:1121568886 on failed volume /data/disk6/dfs/current
> {noformat}
> 2. NN receives DatanodeProtocol.DISK_ERROR, but that isn't enough to have NN remove the DN and the replicas from the BlocksMap. In addition, blockReport doesn't provide the diff, given that it is done per storage.
> {noformat}
> 2014-10-04 02:11:12,681 WARN org.apache.hadoop.hdfs.server.namenode.NameNode: Disk error on DatanodeRegistration(xx.xx.xx.xxx, datanodeUuid=f3b8a30b-e715-40d6-8348-3c766f9ba9ab, infoPort=50075, ipcPort=50020, storageInfo=lv=-55;cid=CID-e3c38355-fde5-4e3a-b7ce-edacebdfa7a1;nsid=420527250;c=1410283484939): DataNode failed volumes:/data/disk6/dfs/current
> {noformat}
> 3. Run fsck on the file and confirm the NN's BlocksMap still has that replica.
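The per-storage reporting gap the issue describes can be illustrated with a small sketch. This is a hypothetical model, not the real BlockManager API: the NN replaces its view of a storage only when a report for that storage arrives, and a failed storage simply stops reporting, so its stale replicas linger.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class PerStorageReportSketch {
    // NN-side view: storageId -> blocks it believes live there.
    static Map<String, Set<Long>> nnView = new HashMap<>();

    // The diff is computed per storage; other storages are untouched.
    static void processBlockReport(String storageId, Set<Long> reported) {
        nnView.put(storageId, new HashSet<>(reported));
    }

    public static void main(String[] args) {
        processBlockReport("disk6", new HashSet<>(Arrays.asList(100L, 101L)));
        processBlockReport("disk7", new HashSet<>(Arrays.asList(200L)));

        // disk6 fails: the DN stops reporting it, but no report ever
        // arrives that removes its blocks, so the NN keeps counting them.
        processBlockReport("disk7", new HashSet<>(Arrays.asList(200L)));

        System.out.println(nnView.get("disk6").size());
    }
}
```

Because no future blockReport covers the failed storage, nothing on the report path ever subtracts its replicas; that is why an explicit per-block deletion notification (as in the one-liner above) or NN-side handling of the failed-storage signal is needed.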



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
