hadoop-hdfs-issues mailing list archives

From "Chris Nauroth (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-11340) DataNode reconfigure for disks doesn't remove the failed volumes
Date Mon, 06 Mar 2017 20:27:33 GMT

    [ https://issues.apache.org/jira/browse/HDFS-11340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15898024#comment-15898024 ]

Chris Nauroth commented on HDFS-11340:
--------------------------------------

[~manojg], thank you for the patch.  This looks good.  I have 2 small requests:

{code}
      // removed when the failure was detected by DataNode#checkDiskErorrAsync.
{code}

Please fix the typo: "Erorr" should be "Error".

{code}
  void addVolumeFailureInfo(VolumeFailureInfo volumeFailureInfo) {
    if (!volumeFailureInfos.containsKey(volumeFailureInfo
        .getFailedStorageLocation())) {
      volumeFailureInfos.put(volumeFailureInfo.getFailedStorageLocation(),
          volumeFailureInfo);
    }
  }
{code}

Please add a comment explaining why the {{containsKey}} check is necessary, since this was
a point of confusion in earlier code review feedback.  That way, other maintainers reading
the code won't accidentally remove the {{containsKey}} check thinking that it's unnecessary.
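
For illustration only, something along these lines might work; the rationale stated in the comment below is my guess at the intent, not wording from the patch, so please adjust it to match the actual reasoning:

{code}
  void addVolumeFailureInfo(VolumeFailureInfo volumeFailureInfo) {
    // Keep only the first record per failed storage location. If the failure
    // was already detected earlier (e.g. by DataNode#checkDiskErrorAsync),
    // overwriting that entry here would discard the original failure date and
    // estimated capacity lost.
    if (!volumeFailureInfos.containsKey(volumeFailureInfo
        .getFailedStorageLocation())) {
      volumeFailureInfos.put(volumeFailureInfo.getFailedStorageLocation(),
          volumeFailureInfo);
    }
  }
{code}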

> DataNode reconfigure for disks doesn't remove the failed volumes
> ----------------------------------------------------------------
>
>                 Key: HDFS-11340
>                 URL: https://issues.apache.org/jira/browse/HDFS-11340
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 3.0.0-alpha1
>            Reporter: Manoj Govindassamy
>            Assignee: Manoj Govindassamy
>         Attachments: HDFS-11340.01.patch, HDFS-11340.02.patch, HDFS-11340.03.patch, HDFS-11340.04.patch
>
>
> Say a DataNode (uuid: xyz) has disks D1 and D2. When D1 turns bad, a JMX query on FSDatasetState-xyz for the "NumFailedVolumes" attribute correctly shows the failed volume count as 1, and the "FailedStorageLocations" attribute lists the failed storage location as "D1".
> It is possible to add or remove disks on this DataNode by running the {{reconfigure}} command. Let the failed disk D1 be removed from the conf, so that the new conf has only the one good disk D2. After running the reconfigure command for this DataNode with the new disk conf, the expectation is that the DataNode would no longer report "NumFailedVolumes" or "FailedStorageLocations". But even after removing the failed disk from the conf and a successful reconfigure, the DataNode continues to show "NumFailedVolumes" as 1 and "FailedStorageLocations" as "D1", and it never gets reset.
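
For anyone reproducing the report above: the disk reconfiguration is typically triggered with {{hdfs dfsadmin -reconfig datanode <host:ipc_port> start}} after editing {{dfs.datanode.data.dir}}, and the failed-volume counters can be read back over the DataNode's HTTP JMX endpoint. Below is a minimal standalone sketch that fetches and prints that MBean as JSON; the class name, the default HTTP port 9864, and the placeholder uuid are illustrative assumptions, so substitute the values for your cluster.

{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.nio.charset.StandardCharsets;

/** Prints a DataNode's FSDatasetState MBean, which carries NumFailedVolumes. */
public class CheckFailedVolumes {
  public static void main(String[] args) throws Exception {
    // Hypothetical defaults: DataNode HTTP address and DataNode uuid.
    String httpAddress = args.length > 0 ? args[0] : "localhost:9864";
    String uuid = args.length > 1 ? args[1] : "<datanode-uuid>";
    URL jmx = new URL("http://" + httpAddress
        + "/jmx?qry=Hadoop:service=DataNode,name=FSDatasetState-" + uuid);
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(jmx.openStream(), StandardCharsets.UTF_8))) {
      // The returned JSON includes the "NumFailedVolumes" and
      // "FailedStorageLocations" attributes described in this issue.
      String line;
      while ((line = in.readLine()) != null) {
        System.out.println(line);
      }
    }
  }
}
{code}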




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


