hadoop-hdfs-issues mailing list archives

From "Rodrigo Schmidt (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-1032) Extend DFSck with an option to list corrupt files using API from HDFS-729
Date Wed, 17 Mar 2010 00:49:27 GMT

    [ https://issues.apache.org/jira/browse/HDFS-1032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12846245#action_12846245 ]

Rodrigo Schmidt commented on HDFS-1032:

It doesn't have to be that complicated, right?

What about the following change to the code in your current patch:

// directory representation of path
String pathdir = path.endsWith(Path.SEPARATOR) ? path : path + Path.SEPARATOR;
for (FileStatus fileStatus : corruptedFileStatuses) {
    String currentPath = fileStatus.getPath().toString();
    if (currentPath.equals(path) || currentPath.startsWith(pathdir)) {

That looks simpler to me. What do you think?
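
[Editor's note: as a self-contained illustration of the comparison being proposed above, a minimal sketch follows. Only the prefix test comes from the snippet in the comment; the class and method names, and the idea of collecting matches into a list, are hypothetical.]

import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.Path;

public class CorruptFileFilter {
    // Keep only the corrupt files that are the given path itself or that live under it.
    public static List<FileStatus> filterUnderPath(String path, List<FileStatus> corruptedFileStatuses) {
        // directory representation of path, guaranteed to end with the path separator
        String pathdir = path.endsWith(Path.SEPARATOR) ? path : path + Path.SEPARATOR;
        List<FileStatus> result = new ArrayList<FileStatus>();
        for (FileStatus fileStatus : corruptedFileStatuses) {
            String currentPath = fileStatus.getPath().toString();
            if (currentPath.equals(path) || currentPath.startsWith(pathdir)) {
                result.add(fileStatus);
            }
        }
        return result;
    }
}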

> Extend DFSck with an option to list corrupt files using API from HDFS-729
> -------------------------------------------------------------------------
>                 Key: HDFS-1032
>                 URL: https://issues.apache.org/jira/browse/HDFS-1032
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: tools
>            Reporter: Rodrigo Schmidt
>            Assignee: André Oriani
>         Attachments: hdfs-1032_aoriani.patch
> HDFS-729 created a new API to namenode that returns the list of corrupt files.
> We can now extend fsck (DFSck.java) to add an option (e.g. --list_corrupt) that queries
> the namenode using the new API and lists the corrupt blocks to the users.
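
[Editor's note: for illustration only, a rough sketch of how such an option might be wired into the tool's argument handling. The class is hypothetical and listCorruptFiles() merely stands in for the namenode call introduced by HDFS-729; its real name and signature are not shown in this thread.]

import java.util.List;
import org.apache.hadoop.fs.FileStatus;

public class ListCorruptOption {
    // Hypothetical integration point: if --list_corrupt is passed, print the
    // corrupt files reported for the given path and stop.
    public static void run(String[] args, String path) throws Exception {
        for (String arg : args) {
            if ("--list_corrupt".equals(arg)) {
                for (FileStatus fileStatus : listCorruptFiles(path)) {
                    System.out.println(fileStatus.getPath());
                }
                return;
            }
        }
    }

    // Placeholder for the HDFS-729 API call to the namenode.
    private static List<FileStatus> listCorruptFiles(String path) throws Exception {
        throw new UnsupportedOperationException("query the namenode here");
    }
}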

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
