hadoop-common-dev mailing list archives

From "zhangwei (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-5019) add querying block's info in the fsck facility
Date Tue, 13 Jan 2009 10:01:02 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-5019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel

zhangwei updated HADOOP-5019:

    Attachment: HADOOP-5019.patch

If the path argument starts with "blk_", fsck parses the block id that follows and ignores the generation stamp. It then fetches the block's inode through blocksMap.getINode(b), walks the inode's parent links recursively to reconstruct the full path, and finally prints the full path, the datanode locations, and the permission status.

If the path argument does not start with "blk_", fsck checks it as normal.
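The lookup described above can be sketched as a small, self-contained Java example: parse a "blk_" argument, ignore the generation stamp, resolve the block id to an inode, and walk parent links to rebuild the full path. The Inode class and blockToInode map here are illustrative stand-ins, not the real HDFS internals (the patch itself goes through BlocksMap.getINode and the namesystem's INode tree).

```java
import java.util.HashMap;
import java.util.Map;

public class BlockPathSketch {
    // Stand-in for an HDFS inode: a name plus a parent link.
    static class Inode {
        final String name;
        final Inode parent;
        Inode(String name, Inode parent) { this.name = name; this.parent = parent; }
    }

    // Stand-in for BlocksMap: block id -> owning inode.
    static final Map<Long, Inode> blockToInode = new HashMap<Long, Inode>();

    // Extract the numeric block id from "blk_<id>" or "blk_<id>_<genstamp>",
    // ignoring the generation stamp as the patch description says.
    static long parseBlockId(String arg) {
        if (!arg.startsWith("blk_")) {
            throw new IllegalArgumentException("not a block id: " + arg);
        }
        String[] parts = arg.substring("blk_".length()).split("_");
        return Long.parseLong(parts[0]);
    }

    // Walk parent links recursively to build the inode's full path.
    static String fullPath(Inode inode) {
        if (inode.parent == null) {
            return "";               // the root contributes only the leading "/"
        }
        return fullPath(inode.parent) + "/" + inode.name;
    }

    // Resolve a "blk_..." argument to the owning file's full path, or null
    // if the block id is unknown.
    static String pathForBlockArg(String arg) {
        Inode inode = blockToInode.get(parseBlockId(arg));
        return inode == null ? null : fullPath(inode);
    }

    public static void main(String[] args) {
        Inode root = new Inode("", null);
        Inode user = new Inode("user", root);
        Inode file = new Inode("data.txt", user);
        blockToInode.put(28622148L, file);
        System.out.println(pathForBlockArg("blk_28622148_1001")); // /user/data.txt
    }
}
```

A real implementation would also print the block's datanode locations and the file's permission status; those require namenode state and are omitted from this sketch.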

> add querying block's info in the fsck facility
> ----------------------------------------------
>                 Key: HADOOP-5019
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5019
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: dfs
>            Reporter: zhangwei
>            Priority: Minor
>         Attachments: HADOOP-5019.patch
>   Original Estimate: 24h
>  Remaining Estimate: 24h
> As it stands fsck works pretty well, but when a developer encounters a log message such as "Block blk_28622148 is not valid", we wish to know which file the block belongs to and which datanodes hold it. This can be solved by running "bin/hadoop fsck -files -blocks -locations / | grep <blockid>", but as mentioned earlier in HADOOP-4945, that is not an efficient approach on a big production cluster.
> So perhaps we could do something to make fsck more convenient.

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
