hadoop-hdfs-issues mailing list archives

From "Eli Collins (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-811) Additional tests(Unit tests & Functional tests) for HDFS-457.
Date Wed, 26 May 2010 23:55:39 GMT

    [ https://issues.apache.org/jira/browse/HDFS-811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12872026#action_12872026 ]

Eli Collins commented on HDFS-811:
----------------------------------

Thanks for taking a look, Ravi. Konst asked that the patch for HDFS-1161 apply against trunk
instead of depending on this one, so I moved the test from that jira to this one. I also made
the tests wait indefinitely for the required failures instead of for just a computed amount of
time, which Dhruba suggested in HDFS-1161.
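For illustration, a minimal sketch of the "wait indefinitely" style described above, using a
hypothetical waitForCondition helper (not part of the attached patch): the test polls for the
expected failure condition rather than sleeping for a precomputed interval, so the only bound
is the test runner's own timeout.

{code:java}
import java.util.concurrent.Callable;

// Hypothetical helper, for illustration only: poll until the expected
// condition holds instead of sleeping for a computed amount of time.
public class WaitUtil {
  public static void waitForCondition(Callable<Boolean> condition)
      throws Exception {
    // No computed timeout: the test framework's timeout is the only bound,
    // so a slow machine cannot cause a spurious test failure.
    while (!condition.call()) {
      Thread.sleep(100);
    }
  }
}
{code}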

> Additional tests(Unit tests & Functional tests)  for HDFS-457.
> --------------------------------------------------------------
>
>                 Key: HDFS-811
>                 URL: https://issues.apache.org/jira/browse/HDFS-811
>             Project: Hadoop HDFS
>          Issue Type: Test
>          Components: test
>    Affects Versions: 0.21.0, 0.22.0
>            Reporter: Ravi Phulari
>            Assignee: Eli Collins
>            Priority: Minor
>             Fix For: 0.21.0, 0.22.0
>
>         Attachments: hdfs-811-1.patch, hdfs-811-2.patch
>
>
>  HDFS-457 introduced a improvement which allows  datanode to continue if a volume for
replica storage fails. Previously a datanode resigned if any volume failed. 
> Description of HDFS-457
> {quote}
> Current implementation shuts DataNode down completely when one of the configured volumes
of the storage fails.
> This is rather wasteful behavior because it decreases utilization (good storage becomes
unavailable) and imposes extra load on the system (replication of the blocks from the good
volumes). These problems will become even more prominent when we move to mixed (heterogeneous)
clusters with many more volumes per Data Node.
> {quote}
> I suggest following additional tests for this improvement. 
> #1 Test successive  volume failures ( Minimum 4 volumes )
> #2 Test if each volume failure reports reduction in available DFS space and remaining
space.
> #3 Test if failure of all volumes on a data nodes leads to the data node failure.
> #4 Test if correcting failed storage disk brings updates and increments available DFS
space. 
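Below is a minimal sketch (not the attached patch) of how a test along the lines of #1/#2
above might be structured against the MiniDFSCluster test harness. The
dfs.datanode.failed.volumes.tolerated key comes from HDFS-457; simulating a volume failure by
removing permissions on a data directory follows the approach of the existing volume-failure
tests; the class name, paths, and final assertion are made up for the example.

{code:java}
import java.io.File;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DFSTestUtil;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.junit.Test;

public class TestVolumeFailureSketch {

  @Test
  public void testSingleVolumeFailure() throws Exception {
    Configuration conf = new Configuration();
    // Allow each datanode to keep running after one failed volume (HDFS-457).
    conf.setInt("dfs.datanode.failed.volumes.tolerated", 1);

    // MiniDFSCluster gives each datanode two storage directories by default.
    MiniDFSCluster cluster = new MiniDFSCluster(conf, 3, true, null);
    try {
      cluster.waitActive();
      FileSystem fs = cluster.getFileSystem();

      // Baseline write while all volumes are healthy.
      DFSTestUtil.createFile(fs, new Path("/before"), 1024L, (short) 3, 0L);

      // Simulate a volume failure on the first datanode by making its first
      // data directory inaccessible (the same trick the existing
      // volume-failure tests use).
      File dataDir = new File(cluster.getDataDirectory());
      File dn0Vol1 = new File(dataDir, "data1");
      FileUtil.chmod(dn0Vol1.toString(), "000");

      // Subsequent writes should still succeed because the datanode is
      // allowed to continue with its remaining good volume.
      DFSTestUtil.createFile(fs, new Path("/after"), 1024L, (short) 3, 0L);

      // A complete test would also wait (indefinitely, per the comment above)
      // for the namenode to report the reduced capacity before asserting on
      // the available DFS space.
    } finally {
      cluster.shutdown();
    }
  }
}
{code}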

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

