hadoop-hdfs-issues mailing list archives

From "Kihwal Lee (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-9819) FsVolume should tolerate a few check-dir failures due to accidental deletion
Date Wed, 17 Feb 2016 19:02:18 GMT

    [ https://issues.apache.org/jira/browse/HDFS-9819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15150978#comment-15150978 ]

Kihwal Lee commented on HDFS-9819:
----------------------------------

bq. ... do a delete dir/file operation by mistake in datanode data-dirs...
If you are talking about making the DN tolerant of accidental data-dir deletions, besides being
unacceptable, it sounds very strange. I assume you have a specific scenario in mind. Please
elaborate on your use case.


> FsVolume should tolerate a few check-dir failures due to accidental deletion
> -----------------------------------------------------------------------------
>
>                 Key: HDFS-9819
>                 URL: https://issues.apache.org/jira/browse/HDFS-9819
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Lin Yiqun
>            Assignee: Lin Yiqun
>             Fix For: 2.7.1
>
>         Attachments: HDFS-9819.001.patch
>
>
> FsVolume should tolerate a few check-dir failures because sometimes a dir/file in the datanode
> data-dirs is deleted by mistake. The {{DataNode#startCheckDiskErrorThread}} then invokes the
> checkDir method periodically, finds the dir missing, and throws an exception. The checked volume
> is added to the failed-volume list, and the blocks on that volume are replicated again. But this
> re-replication is not actually needed. We should let a volume tolerate a few check-dir failures,
> similar to the config {{dfs.datanode.failed.volumes.tolerated}}.
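A minimal sketch of the tolerance idea described above, in the spirit of {{dfs.datanode.failed.volumes.tolerated}}. The class, the property name {{dfs.datanode.checkdir.failures.tolerated}}, and the method names are hypothetical illustrations, not the API of the attached patch.

{code:java}
/**
 * Hypothetical sketch: count consecutive check-dir failures for a volume and
 * only treat the volume as failed once the count exceeds a configured limit.
 * The config key and class name are illustrative, not from HDFS-9819.001.patch.
 */
public class CheckDirFailureTracker {
  // Hypothetical key, modeled on dfs.datanode.failed.volumes.tolerated.
  public static final String TOLERATED_KEY =
      "dfs.datanode.checkdir.failures.tolerated";
  public static final int TOLERATED_DEFAULT = 3;

  private final int tolerated;
  private int consecutiveFailures = 0;

  public CheckDirFailureTracker(int tolerated) {
    this.tolerated = tolerated;
  }

  /**
   * Record the outcome of one periodic check-dir pass.
   * @param checkSucceeded true if the volume's directories were all present
   * @return true if the volume should now be reported as failed
   */
  public synchronized boolean onCheckResult(boolean checkSucceeded) {
    if (checkSucceeded) {
      consecutiveFailures = 0;   // a successful pass resets the counter
      return false;
    }
    consecutiveFailures++;
    // Report the volume as failed only after it has missed the check
    // more than the tolerated number of consecutive times.
    return consecutiveFailures > tolerated;
  }
}
{code}

A transient or accidental deletion that is restored before the threshold is reached would then not trigger re-replication of the volume's blocks.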



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
