hadoop-common-dev mailing list archives

From "Marco Nicosia (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-2063) Command to pull corrupted files
Date Mon, 28 Jan 2008 17:29:34 GMT

https://issues.apache.org/jira/browse/HADOOP-2063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12563213#action_12563213

Marco Nicosia commented on HADOOP-2063:

I'm disappointed this didn't go into Hadoop 0.16, despite having been targeted for 0.16. I understand that this is a pretty big change, but I really want to be sure we get something into Hadoop 0.17.

While we wait for this, any existing Hadoop DFS instances with corrupted files will have to sit, waiting for their owners to have a way to retrieve those files. During that time, fsck will always report corruption. Being unable to do anything with these files (except delete them) could be masking other Hadoop issues we would otherwise detect.

> Command to pull corrupted files
> -------------------------------
>                 Key: HADOOP-2063
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2063
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: fs
>    Affects Versions: 0.14.1
>            Reporter: Koji Noguchi
>            Priority: Blocker
>             Fix For: 0.17.0
> Before 0.14, dfs -get didn't perform checksum checking.
> Users were able to download corrupted files to inspect them and decide whether to delete them.
> Since 0.14, dfs -get also performs the checksumming.
> Requesting a no-checksum variant of the get command.
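To illustrate the behavior being requested, here is a minimal conceptual sketch: a "get" that verifies a checksum on read by default, plus an opt-out flag that still returns the bytes so the owner can inspect a corrupted file. The function name, the CRC32 scheme, and the flag name are assumptions for this example only, not Hadoop's actual ChecksumFileSystem implementation.

```python
import zlib


def get_file(data: bytes, stored_crc: int, ignore_checksum: bool = False) -> bytes:
    """Sketch of a checksummed get (hypothetical API, not Hadoop's).

    By default, verify the stored CRC32 before handing the bytes back,
    mirroring post-0.14 `dfs -get`. With ignore_checksum=True, skip the
    check and return the (possibly corrupted) bytes anyway -- the
    no-checksum get this issue asks for.
    """
    if not ignore_checksum and zlib.crc32(data) != stored_crc:
        raise IOError("checksum mismatch: refusing to copy corrupted file")
    return data


# A payload whose stored checksum no longer matches (simulated corruption):
good = b"block contents"
crc = zlib.crc32(good)
corrupted = b"block c0ntents"

# Default behavior (post-0.14): the get fails on corruption.
try:
    get_file(corrupted, crc)
    get_failed = False
except IOError:
    get_failed = True

# Requested behavior: an ignore-checksum get returns the bytes regardless,
# letting the user inspect the file before deciding to delete it.
recovered = get_file(corrupted, crc, ignore_checksum=True)
```

The key design point is that skipping verification is an explicit, per-invocation choice by the user, so the default path keeps protecting against silently propagating corrupt data.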

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
