hadoop-hdfs-issues mailing list archives

From "Pradeep Bhadani (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-8126) hadoop fsck does not correctly check for corrupt blocks for a file
Date Fri, 10 Apr 2015 18:26:12 GMT

    [ https://issues.apache.org/jira/browse/HDFS-8126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14490067#comment-14490067 ]

Pradeep Bhadani commented on HDFS-8126:

It makes sense to run the block scanner only every 3 weeks, as it is a costly operation.
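For context, the 3-week interval mentioned above corresponds to the DataNode block scanner setting `dfs.datanode.scan.period.hours` (default 504 hours, i.e. 3 weeks). A sketch of how it could be shortened in hdfs-site.xml; the 168-hour value below is purely illustrative:

```xml
<!-- hdfs-site.xml (sketch): shorten the block scanner interval.
     The default, 504 hours = 3 weeks, is the interval the comment
     above refers to. -->
<property>
  <name>dfs.datanode.scan.period.hours</name>
  <value>168</value> <!-- e.g. weekly, at the cost of extra disk I/O -->
</property>
```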

However, FSCK relies on a block scanner report that may be up to 3 weeks old when reporting on the health of the current state of the cluster, so its result might not be correct (as in my test case).

So when we prepare for a cluster upgrade, the FSCK command alone is not enough to confirm that the current state of the system is healthy.

We can close this ticket, as this is the expected behavior of FSCK.

> hadoop fsck does not correctly check for corrupt blocks for a file
> ------------------------------------------------------------------
>                 Key: HDFS-8126
>                 URL: https://issues.apache.org/jira/browse/HDFS-8126
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: HDFS, hdfs-client
>    Affects Versions: 2.3.0
>            Reporter: Pradeep Bhadani
> hadoop fsck does not correctly check for corrupt blocks in a file until we try to read that file.
> Test steps (followed on a Cloudera CDH5.1 single-node VM and a Hortonworks HDP2.2 single-node VM):
> 1. Uploaded a file "test.txt" to /user/abc/test.txt on HDFS.
> 2. Ran "hadoop fsck /user/abc/test.txt -files -blocks" to check file integrity and retrieve the block ID.
> 3. Searched for the block file location at the Linux filesystem level.
> 4. Manually edited the block file.
> 5. Re-ran the fsck command "hadoop fsck /user/abc/test.txt".
> 6. At this stage, FSCK still shows the file in a HEALTHY state.
> 7. Waited more than 30 seconds and re-ran FSCK; it still shows a healthy state.
> 8. Tried to read the file with "hadoop fs -cat /user/abc/test.txt". This command fails with a checksum-mismatch error (as expected).
> 9. Re-ran FSCK. Now FSCK shows that 1 block is corrupt.
> 10. Manually edited the block file and restored it to its previous state.
> 11. Tried to cat the file. It works.
> 12. Ran FSCK again. It still reports the file as corrupt.
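The checksum failure in step 8 can be illustrated locally without a cluster. This is only a sketch of the read-path check: HDFS stores a checksum alongside each block and recomputes it on read; here an ordinary file and sha256sum stand in for the block and its CRC metadata, and the block file name is hypothetical.

```shell
# Sketch: why editing a block file on disk makes a later read fail,
# using a local file in place of an HDFS block (names are illustrative).
tmpdir=$(mktemp -d)
block="$tmpdir/blk_1073741825"                        # hypothetical block file
printf 'hello hdfs\n' > "$block"
sha256sum "$block" | awk '{print $1}' > "$block.meta" # "stored" checksum

# Manually edit the block file, as in step 4 of the report.
printf 'tampered!\n' > "$block"

# On read, recompute and compare, as the HDFS client does with its CRCs.
stored=$(cat "$block.meta")
actual=$(sha256sum "$block" | awk '{print $1}')
if [ "$stored" != "$actual" ]; then
    echo "checksum mismatch"   # analogous to the error hadoop fs -cat raises
fi
rm -rf "$tmpdir"
```

Note that this only models the client read path; it says nothing about when the DataNode's background scanner would have noticed the corruption, which is the gap the comment above describes.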

This message was sent by Atlassian JIRA
