hadoop-hdfs-issues mailing list archives

From "Boris Shkolnik (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-457) better handling of volume failure in Data Node storage
Date Tue, 30 Jun 2009 19:02:47 GMT

    [ https://issues.apache.org/jira/browse/HDFS-457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12725747#action_12725747 ]

Boris Shkolnik commented on HDFS-457:

     We should try to keep the DataNode alive even if one of its volumes is not accessible. When
the DataNode handles the error, we can check what percentage of the volumes is down. If
it is more than some predefined threshold, we should shut the node down. But if it is not, we
can keep it alive. In this case we need to do the following:
          o remove the volume from the list of valid volumes
          o go over all the blocks and remove those that reside on this volume
          o immediately schedule a block report to update the NameNode and start replication
          o Optional: we can try to monitor removed volumes (or periodically compare the valid
ones against the configured ones), and if one of them comes back to life (or on an operator
command) we may try to restore it. (I don't know if it is possible and practical, but it can
be designed/done as a next step.)
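The decision logic above could be sketched roughly as follows. This is only a minimal illustration of the threshold check, not the actual FSDataset/FSVolumeSet code; the class, method, and field names here are hypothetical, and the block removal and block report scheduling are represented only as comments.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the volume-failure policy described above.
// Names (VolumeSet, handleVolumeFailure, failureThreshold) are illustrative.
class VolumeSet {
    private final List<String> validVolumes = new ArrayList<>();
    private final int configuredVolumes;
    private final double failureThreshold; // max fraction of volumes allowed to fail

    VolumeSet(List<String> dirs, double failureThreshold) {
        this.validVolumes.addAll(dirs);
        this.configuredVolumes = dirs.size();
        this.failureThreshold = failureThreshold;
    }

    /**
     * Handle an I/O failure on one volume. Returns true if the DataNode
     * should keep running, false if it should shut itself down.
     */
    boolean handleVolumeFailure(String failedDir) {
        // 1. remove the volume from the list of valid volumes
        validVolumes.remove(failedDir);
        // 2. in the real DataNode: drop the block entries that reside on this
        //    volume and schedule an immediate block report so the NameNode can
        //    start re-replication (omitted in this sketch)
        double failedFraction =
            (configuredVolumes - validVolumes.size()) / (double) configuredVolumes;
        // 3. keep the node alive only while the failed fraction stays
        //    within the predefined threshold
        return failedFraction <= failureThreshold;
    }

    int validVolumeCount() {
        return validVolumes.size();
    }
}
```

For example, with four configured volumes and a threshold of 0.5, the node would survive the first two volume failures and shut down on the third.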

Affected classes and methods:

    * BlockReceiver constructor
    * BlockReceiver run()
    * BlockReceiver lastDataNodeRun()
    * FSDataset, FSVolume, FSVolumeSet, FSDir 

> better handling of volume failure in Data Node storage
> ------------------------------------------------------
>                 Key: HDFS-457
>                 URL: https://issues.apache.org/jira/browse/HDFS-457
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: data-node
>            Reporter: Boris Shkolnik
>            Assignee: Boris Shkolnik
> Current implementation shuts the DataNode down completely when one of the configured volumes
> of the storage fails.
> This is rather wasteful behavior because it decreases utilization (good storage becomes
> unavailable) and imposes extra load on the system (replication of the blocks from the good
> volumes). These problems will become even more prominent when we move to mixed (heterogeneous)
> clusters with many more volumes per DataNode.

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
