hadoop-hdfs-issues mailing list archives

From "Pradeep Bhadani (JIRA)" <j...@apache.org>
Subject [jira] [Created] (HDFS-8126) hadoop fsck does not correctly check for corrupt blocks for a file
Date Fri, 10 Apr 2015 14:13:12 GMT
Pradeep Bhadani created HDFS-8126:
-------------------------------------

             Summary: hadoop fsck does not correctly check for corrupt blocks for a file
                 Key: HDFS-8126
                 URL: https://issues.apache.org/jira/browse/HDFS-8126
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: HDFS, hdfs-client
    Affects Versions: 2.3.0
            Reporter: Pradeep Bhadani


hadoop fsck does not report corrupt blocks for a file until a client actually attempts to read that file.

Test steps (followed on a Cloudera CDH5.1 single-node VM and a Hortonworks HDP2.2 single-node VM):
1. Uploaded a file "test.txt" to /user/abc/test.txt on HDFS.
2. Ran "hadoop fsck /user/abc/test.txt -files -blocks" to check file integrity and retrieve the block ID.
3. Searched for the block file location on the Linux filesystem.
4. Manually edited the block file.
5. Re-ran the fsck command "hadoop fsck /user/abc/test.txt".
6. At this stage, fsck still shows the file in HEALTHY state.
7. Waited more than 30 seconds and re-ran fsck; it still shows HEALTHY state.
8. Tried to read the file with "hadoop fs -cat /user/abc/test.txt". This command fails with a checksum-mismatch error (as expected).
9. Re-ran fsck. Now fsck shows that 1 block is corrupt.
10. Manually edited the block file, restoring its previous contents.
11. Tried to cat the file. It works.
12. Re-ran fsck. It still reports the file as corrupt.
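
The steps above can be sketched as a shell session. This is a minimal sketch, assuming a single-node cluster where you have shell access to the DataNode's data directory; the data-dir path (/dfs/dn) and block ID (blk_1073741825) are hypothetical placeholders and will differ per cluster.

```shell
# Assumes a running HDFS cluster; paths and block IDs below are illustrative.

# Steps 1-2: upload a file and list its blocks
hadoop fs -put test.txt /user/abc/test.txt
hadoop fsck /user/abc/test.txt -files -blocks
# note the block name printed, e.g. blk_1073741825

# Steps 3-4: corrupt the on-disk replica (location depends on dfs.datanode.data.dir)
BLK=$(find /dfs/dn -name 'blk_1073741825*' ! -name '*.meta' | head -1)
echo "garbage" >> "$BLK"

# Steps 5-7: fsck still reports HEALTHY, because fsck only consults NameNode
# metadata and no corruption report has reached the NameNode yet
hadoop fsck /user/abc/test.txt

# Step 8: a read fails checksum verification against the .meta file and
# reports the bad replica to the NameNode
hadoop fs -cat /user/abc/test.txt

# Step 9: only now does fsck show the corrupt block
hadoop fsck /user/abc/test.txt
```

The sketch illustrates why the delay occurs: fsck reads block state from the NameNode, which only learns of the corruption when a reader's checksum check fails (or the DataNode's periodic block scanner finds it).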

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
