hadoop-common-dev mailing list archives

From "Hairong Kuang" <hair...@yahoo-inc.com>
Subject RE: [jira] Resolved: (HADOOP-731) Sometimes when a dfs file is accessed and one copy has a checksum error the I/O command fails, even if another copy is alright.
Date Thu, 04 Jan 2007 22:36:17 GMT
I feel that HADOOP-731 is not a duplicate of HADOOP-855. The proposal in
HADOOP-855 is to report the corrupted data block/checksum block to the namenode
so it can be deleted. That helps the next read get correct data, but the current
read still throws a checksum error and thus fails the cp/get operation that
called read.
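
To make the distinction concrete, here is a minimal, purely illustrative Java
sketch of the behavior HADOOP-731 asks for: on a checksum failure, fall back to
another replica instead of failing the caller's read. None of the names below
(Replica, readWithFailover, reportBadReplica, ChecksumMismatchException) come
from the actual DFSClient code; they are hypothetical placeholders.

    import java.io.IOException;
    import java.util.List;

    class ReplicaFailoverSketch {

        /** Hypothetical stand-in for Hadoop's checksum failure. */
        static class ChecksumMismatchException extends IOException {
            ChecksumMismatchException(String msg) { super(msg); }
        }

        /** Hypothetical handle for one stored copy of the data. */
        interface Replica {
            // Verifies checksums while reading; may throw ChecksumMismatchException.
            byte[] read() throws IOException;
        }

        /**
         * Try each replica in turn; the caller's read fails only if every copy
         * is corrupt. Reporting the bad copy (what HADOOP-855 proposes) helps
         * later reads, but the retry here is what keeps the current cp/get alive.
         */
        byte[] readWithFailover(List<Replica> replicas) throws IOException {
            IOException lastFailure = null;
            for (Replica replica : replicas) {
                try {
                    return replica.read();        // success: deliver bytes from this copy
                } catch (ChecksumMismatchException e) {
                    lastFailure = e;              // remember the failure
                    reportBadReplica(replica);    // hypothetical: ask for re-replication
                }
            }
            throw lastFailure != null
                    ? lastFailure
                    : new IOException("no replicas available");
        }

        private void reportBadReplica(Replica replica) {
            // Placeholder: a real client would notify the namenode so the corrupt
            // copy is dropped and re-replicated from a good one.
        }
    }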

Hairong

-----Original Message-----
From: Sameer Paranjpye (JIRA) [mailto:jira@apache.org] 
Sent: Thursday, January 04, 2007 1:45 PM
To: hadoop-dev@lucene.apache.org
Subject: [jira] Resolved: (HADOOP-731) Sometimes when a dfs file is accessed and one copy has a checksum error the I/O command fails, even if another copy is alright.


     [ https://issues.apache.org/jira/browse/HADOOP-731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sameer Paranjpye resolved HADOOP-731.
-------------------------------------

    Resolution: Duplicate

Duplicate of HADOOP-855.

> Sometimes when a dfs file is accessed and one copy has a checksum error the I/O command fails, even if another copy is alright.
> -------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-731
>                 URL: https://issues.apache.org/jira/browse/HADOOP-731
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.7.2
>            Reporter: Dick King
>         Assigned To: Sameer Paranjpye
>
> for a particular file [alas, the file no longer exists -- I had to progress]
>     $dfs -cp foo bar
> and
>     $dfs -get foo local
> failed on a checksum error.  The dfs browser's download function retrieved the file, so either that function doesn't check, or more likely the download function got a different copy.
> When a checksum fails on one copy of a file that is redundantly stored, I would prefer that dfs try a different copy, mark the bad one as not existing [which should induce a fresh copy being made from one of the good copies eventually], and make the call continue to work and deliver bytes.
> Ideally, if all copies have checksum errors but it's possible to piece together a good copy, I would like that to be done.
> -dk
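
The last wish in the report above -- piecing a good copy together when every
replica has some corruption -- could look roughly like the following sketch,
assuming checksums are kept per block so corruption can be localized. Again,
ReplicaView, blockCount, checksumOk and readBlock are hypothetical names for
illustration, not the dfs API.

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.util.List;

    class BlockAssemblySketch {

        /** Hypothetical per-block view of one replica of a file. */
        interface ReplicaView {
            int blockCount();
            boolean checksumOk(int block) throws IOException;  // verify block against its stored checksum
            byte[] readBlock(int block) throws IOException;    // raw block data
        }

        /**
         * Take each block from the first replica whose checksum verifies;
         * fail only when some block is corrupt on every replica.
         */
        byte[] assemble(List<ReplicaView> replicas) throws IOException {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            int blocks = replicas.get(0).blockCount();
            for (int block = 0; block < blocks; block++) {
                boolean recovered = false;
                for (ReplicaView replica : replicas) {
                    if (replica.checksumOk(block)) {
                        out.write(replica.readBlock(block));   // good copy of this block found
                        recovered = true;
                        break;
                    }
                }
                if (!recovered) {
                    throw new IOException("block " + block + " is corrupt on every replica");
                }
            }
            return out.toByteArray();
        }
    }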

--
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
https://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira

        


