hadoop-hdfs-issues mailing list archives

From "Rodrigo Schmidt (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-1032) Extend DFSck with an option to list corrupt files using API from HDFS-729
Date Wed, 24 Mar 2010 00:34:27 GMT

    [ https://issues.apache.org/jira/browse/HDFS-1032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12848982#action_12848982 ]

Rodrigo Schmidt commented on HDFS-1032:
---------------------------------------

You made me think about the strength of your final output sentence, and it raised a few
points:

1) When you say "There are x corrupt files", the user thinks these are all the corrupt files
in that subtree, which is not accurate, since getCorruptFiles() returns at most a fixed
number of files.


2) As you pointed out, when the namenode is in safe mode, your output is not accurate.

What do you think of not printing this last summary sentence and limiting the output to the
list of files that may be corrupt?
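
To make that concrete, here is a rough sketch of the kind of output loop I have in mind. It
is only an illustration: the printCorruptFiles() helper is made up, and the FileStatus[]
argument stands for whatever the HDFS-729 getCorruptFiles() call actually returns.

import java.io.PrintWriter;
import org.apache.hadoop.fs.FileStatus;

class CorruptFileListing {
  // Print one path per line for the files reported by the namenode, with no
  // trailing "There are x corrupt files" summary: the call returns at most a
  // fixed number of entries, so a count here could understate the real total
  // (and is unreliable while the namenode is in safe mode).
  static void printCorruptFiles(FileStatus[] corrupt, PrintWriter out) {
    for (FileStatus f : corrupt) {
      out.println(f.getPath().toUri().getPath());
    }
  }
}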

> Extend DFSck with an option to list corrupt files using API from HDFS-729
> -------------------------------------------------------------------------
>
>                 Key: HDFS-1032
>                 URL: https://issues.apache.org/jira/browse/HDFS-1032
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: tools
>            Reporter: Rodrigo Schmidt
>            Assignee: André Oriani
>         Attachments: hdfs-1032_aoriani.patch, hdfs-1032_aoriani_2.patch, hdfs-1032_aoriani_3.patch
>
>
> HDFS-729 created a new API to namenode that returns the list of corrupt files.
> We can now extend fsck (DFSck.java) to add an option (e.g. --list_corrupt) that queries
> the namenode using the new API and lists the corrupt blocks to the users.
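
For reference, a rough sketch of how such an option could hook into the fsck argument
handling. The flag spelling, the CorruptFileSource interface, and the run() method below are
illustrative assumptions, not the actual DFSck code or the attached patches.

import java.io.IOException;
import java.io.PrintWriter;
import org.apache.hadoop.fs.FileStatus;

class ListCorruptOption {
  // Placeholder for the HDFS-729 namenode query; the real call and its
  // return type may differ.
  interface CorruptFileSource {
    FileStatus[] getCorruptFiles() throws IOException;
  }

  // Sketch of the new fsck branch: when --list_corrupt is given, query the
  // namenode and print one possibly-corrupt path per line.
  static void run(String[] args, CorruptFileSource namenode, PrintWriter out)
      throws IOException {
    for (String arg : args) {
      if ("--list_corrupt".equals(arg)) {
        for (FileStatus f : namenode.getCorruptFiles()) {
          out.println(f.getPath().toUri().getPath());
        }
        return;
      }
    }
  }
}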

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

