hadoop-common-dev mailing list archives

From "Hadoop QA (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-1200) Datanode should periodically do a disk check
Date Fri, 04 May 2007 22:36:15 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-1200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12493818 ]

Hadoop QA commented on HADOOP-1200:


http://issues.apache.org/jira/secure/attachment/12356806/diskCheck1.patch applied and tested
successfully against trunk revision r534975.

Test results:   http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/119/testReport/
Console output: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/119/console

> Datanode should periodically do a disk check
> --------------------------------------------
>                 Key: HADOOP-1200
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1200
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.12.2
>            Reporter: Hairong Kuang
>         Assigned To: Hairong Kuang
>            Priority: Blocker
>             Fix For: 0.13.0
>         Attachments: diskCheck.patch, diskCheck1.patch
> HADOOP-1170 removed the disk checking feature, but this feature is needed for maintaining
> a large cluster. I agree that checking the disk on every I/O is too costly. A nicer approach
> is to have a thread that periodically does a disk check; the datanode then automatically
> decommissions itself when any error occurs.
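The periodic-check approach described above could be sketched roughly as follows. This is a hypothetical illustration, not the actual patch: the class and method names (PeriodicDiskChecker, checkDir) are invented, and a simple existence/permission probe stands in for whatever checks diskCheck1.patch actually performs.

```java
import java.io.File;

// Hypothetical sketch of a background disk checker; names are illustrative,
// not Hadoop's real API.
public class PeriodicDiskChecker implements Runnable {
    private final File[] dataDirs;
    private final long intervalMs;
    private volatile boolean healthy = true;

    public PeriodicDiskChecker(File[] dataDirs, long intervalMs) {
        this.dataDirs = dataDirs;
        this.intervalMs = intervalMs;
    }

    // A data directory is treated as healthy if it exists, is a directory,
    // and is both readable and writable. (Assumed check, for illustration.)
    static boolean checkDir(File dir) {
        return dir.exists() && dir.isDirectory()
                && dir.canRead() && dir.canWrite();
    }

    public boolean isHealthy() {
        return healthy;
    }

    @Override
    public void run() {
        while (healthy && !Thread.currentThread().isInterrupted()) {
            for (File dir : dataDirs) {
                if (!checkDir(dir)) {
                    // In the datanode, a failure here would trigger
                    // decommissioning rather than just flipping a flag.
                    healthy = false;
                    return;
                }
            }
            try {
                Thread.sleep(intervalMs);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }
}
```

Running the check on its own thread keeps the per-I/O path free of any disk probing, which is the cost the comment above objects to.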

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
