hadoop-common-dev mailing list archives

From "Dick King (JIRA)" <j...@apache.org>
Subject [jira] Created: (HADOOP-731) Sometimes when a dfs file is accessed and one copy has a checksum error the I/O command fails, even if another copy is alright.
Date Thu, 16 Nov 2006 19:23:37 GMT
Sometimes when a dfs file is accessed and one copy has a checksum error the I/O command fails,
even if another copy is alright.
-------------------------------------------------------------------------------------------------------------------------------

                 Key: HADOOP-731
                 URL: http://issues.apache.org/jira/browse/HADOOP-731
             Project: Hadoop
          Issue Type: Bug
          Components: dfs
    Affects Versions: 0.7.2
            Reporter: Dick King


For a particular file [alas, the file no longer exists -- I had to move on], both

    $dfs -cp foo bar        

and

    $dfs -get foo local

failed on a checksum error.  The dfs browser's download function retrieved the file, so either
that function doesn't verify checksums or, more likely, it happened to read a different copy.

When a checksum fails on one copy of a file that is stored redundantly, I would prefer that
dfs try a different copy, mark the bad one as not existing [which should eventually induce a
fresh copy being made from one of the good copies], and let the call continue to work and
deliver bytes.
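
Roughly, I am imagining a read path like the sketch below.  This is only a sketch: Replica,
checksumOk, and markBadReplica are hypothetical stand-ins, not the real dfs client code.

    import java.io.IOException;
    import java.util.List;

    /** Sketch only: Replica and markBadReplica are hypothetical
     *  stand-ins, not the real dfs client API. */
    class ReplicaFailoverSketch {
        interface Replica {
            byte[] read() throws IOException;
            boolean checksumOk(byte[] data);
        }

        byte[] readBlock(List<Replica> replicas) throws IOException {
            for (Replica r : replicas) {
                byte[] data = r.read();
                if (r.checksumOk(data)) {
                    return data;        // good copy: keep delivering bytes
                }
                markBadReplica(r);      // bad copy: mark it and try the next
            }
            throw new IOException("every copy failed its checksum");
        }

        void markBadReplica(Replica r) {
            // hypothetical: report the bad copy so it is treated as not
            // existing and re-replicated from one of the good copies
        }
    }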

Ideally, if all copies have checksum errors but it is possible to piece together a good copy,
I would like that to be done.
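
Again only as a sketch with hypothetical types -- and assuming checksums are kept at a finer
granularity than the whole file -- piecing together could mean taking each chunk from
whichever copy verifies for that chunk:

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.util.List;

    /** Sketch only: piece together one good copy, chunk by chunk,
     *  from several partly-corrupt copies.  Types are hypothetical. */
    class ChunkwiseRepairSketch {
        interface Replica {
            int numChunks();
            byte[] readChunk(int i) throws IOException;
            boolean chunkChecksumOk(int i, byte[] data);
        }

        byte[] pieceTogether(List<Replica> replicas) throws IOException {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            int chunks = replicas.get(0).numChunks();
            for (int i = 0; i < chunks; i++) {
                byte[] good = null;
                for (Replica r : replicas) {
                    byte[] data = r.readChunk(i);
                    if (r.chunkChecksumOk(i, data)) {
                        good = data;    // this copy's chunk verifies
                        break;
                    }
                }
                if (good == null) {
                    throw new IOException("chunk " + i + " is bad in every copy");
                }
                out.write(good, 0, good.length);
            }
            return out.toByteArray();
        }
    }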

-dk


-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira
