hadoop-hdfs-issues mailing list archives

From "dhruba borthakur (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-1403) add -truncate option to fsck
Date Thu, 16 Sep 2010 04:01:33 GMT

    [ https://issues.apache.org/jira/browse/HDFS-1403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12910003#action_12910003 ]

dhruba borthakur commented on HDFS-1403:

This is especially needed when the system supports hflush. A client could issue an hflush,
which persists the block locations in the namenode. The client could then fail before it
writes any bytes to that block, in which case the last block of the file will be permanently
missing. It would be nice to have an option in fsck to delete the last block of a file if
it has size zero and no valid replicas.
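The check described above could be sketched as a simple predicate. This is a hypothetical, simplified model of the decision only; the class and method names (FsckTruncateSketch, shouldTruncateLastBlock) are illustrative and are not actual HDFS APIs. fsck would delete a file's last block only when it has zero length and no valid replicas anywhere:

```java
// Hypothetical sketch of the zero-length-last-block check proposed above.
// Names are illustrative, not real HDFS APIs.
public class FsckTruncateSketch {

    /**
     * A client that called hflush() and then died may leave a last block
     * that is recorded in the namenode but holds no data on any datanode.
     * Such a block is safe to drop only when it is zero bytes long AND
     * has no valid replica; a zero-length block with healthy replicas,
     * or a block with data, must be left alone.
     */
    public static boolean shouldTruncateLastBlock(long lastBlockNumBytes,
                                                  int validReplicaCount) {
        return lastBlockNumBytes == 0 && validReplicaCount == 0;
    }

    public static void main(String[] args) {
        // Block persisted by hflush, client died before writing data:
        System.out.println(shouldTruncateLastBlock(0, 0));    // drop it
        // Zero-length block that still has healthy replicas:
        System.out.println(shouldTruncateLastBlock(0, 3));    // keep it
        // Block with data but no replicas is a different (corrupt) case:
        System.out.println(shouldTruncateLastBlock(1024, 0)); // keep it
    }
}
```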

> add -truncate option to fsck
> ----------------------------
>                 Key: HDFS-1403
>                 URL: https://issues.apache.org/jira/browse/HDFS-1403
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: hdfs client, name-node
>            Reporter: sam rash
> When running fsck, it would be useful to be able to tell HDFS to truncate any corrupt
> file to the last valid position in the last block.  Then, by running hadoop fsck, an admin
> can clean up the filesystem.
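As a usage sketch, the proposed invocation might look like the following. Note the -truncate flag is what this issue proposes and does not yet exist in fsck, and the path is purely illustrative:

```shell
# Hypothetical: -truncate is the option proposed in HDFS-1403, not an
# existing fsck flag; /user/hive/warehouse is an illustrative path.
hadoop fsck /user/hive/warehouse -truncate
```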

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
