hadoop-hdfs-issues mailing list archives

From "Konstantin Boudnik (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-811) Add metrics, failure reporting and additional tests for HDFS-457
Date Wed, 09 Jun 2010 23:56:14 GMT

    [ https://issues.apache.org/jira/browse/HDFS-811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12877268#action_12877268 ]

Konstantin Boudnik commented on HDFS-811:

+1 on Jacob's point. It isn't about how obvious the assertion is; it's about the time one
needs to spend on failure analysis. If there's no message, one needs to pull up the source
code, find the line, read the context, etc. With a meaningful diagnostic message it is
usually enough to glance at the log file.
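To sketch the difference (this is an illustrative stand-in, not code from the patch; the two `assertTrue` helpers below mimic the bare and message-bearing overloads familiar from JUnit's `org.junit.Assert`, and the volume counts are hypothetical values):

```java
// Sketch (not from the HDFS-811 patch): why a message-bearing assertion
// makes a log-file glance sufficient, while a bare one forces a trip to
// the source code.
public class AssertMessageDemo {
    // Bare form: a failure carries no context beyond a stack trace.
    static void assertTrue(boolean condition) {
        if (!condition) {
            throw new AssertionError("assertion failed");
        }
    }

    // Message form: the failure itself states what went wrong.
    static void assertTrue(String message, boolean condition) {
        if (!condition) {
            throw new AssertionError(message);
        }
    }

    public static void main(String[] args) {
        int expectedVolumes = 4;
        int activeVolumes = 3;  // hypothetical value read from a datanode

        try {
            assertTrue(activeVolumes == expectedVolumes);
        } catch (AssertionError e) {
            // Log shows only "assertion failed" -- no context.
            System.out.println("bare:     " + e.getMessage());
        }

        try {
            assertTrue("expected " + expectedVolumes
                    + " active volumes but found " + activeVolumes,
                    activeVolumes == expectedVolumes);
        } catch (AssertionError e) {
            // Log is self-explanatory; no need to open the source.
            System.out.println("with msg: " + e.getMessage());
        }
    }
}
```

The first failure logs only "assertion failed"; the second logs "expected 4 active volumes but found 3", which is exactly the diagnostic one wants when scanning a test log.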

> Add metrics, failure reporting and additional tests for HDFS-457
> ----------------------------------------------------------------
>                 Key: HDFS-811
>                 URL: https://issues.apache.org/jira/browse/HDFS-811
>             Project: Hadoop HDFS
>          Issue Type: Test
>          Components: test
>    Affects Versions: 0.21.0, 0.22.0
>            Reporter: Ravi Phulari
>            Assignee: Eli Collins
>            Priority: Minor
>             Fix For: 0.21.0, 0.22.0
>         Attachments: hdfs-811-1.patch, hdfs-811-2.patch, hdfs-811-3.patch, hdfs-811-4.patch
>  HDFS-457 introduced an improvement which allows a datanode to continue operating if a
> volume used for replica storage fails. Previously a datanode shut down if any volume failed.
> Description of HDFS-457
> {quote}
> Current implementation shuts DataNode down completely when one of the configured volumes
> of the storage fails.
> This is rather wasteful behavior because it decreases utilization (good storage becomes
> unavailable) and imposes extra load on the system (replication of the blocks from the good
> volumes). These problems will become even more prominent when we move to mixed
> (heterogeneous) clusters with many more volumes per Data Node.
> {quote}
> I suggest the following additional tests for this improvement.
> #1 Test successive volume failures (minimum 4 volumes)
> #2 Test if each volume failure reports reduction in available DFS space and remaining
> #3 Test if failure of all volumes on a data node leads to the data node failure.
> #4 Test if correcting failed storage disk brings updates and increments available DFS

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
