hadoop-hdfs-dev mailing list archives

From "Gruust (JIRA)" <j...@apache.org>
Subject [jira] [Created] (HDFS-12649) handling of corrupt blocks not suitable for commodity hardware
Date Thu, 12 Oct 2017 20:48:00 GMT
Gruust created HDFS-12649:

             Summary: handling of corrupt blocks not suitable for commodity hardware
                 Key: HDFS-12649
                 URL: https://issues.apache.org/jira/browse/HDFS-12649
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: namenode
    Affects Versions: 2.8.1
            Reporter: Gruust
            Priority: Minor

Hadoop's documentation tells me it's suitable for commodity hardware in the sense that hardware
failures are expected to happen frequently. However, there is currently no automatic handling
of corrupted blocks, which seems a bit contradictory to me.

See: https://stackoverflow.com/questions/19205057/how-to-fix-corrupt-hdfs-files

This is also problematic for data integrity, as redundancy is not kept at the desired level
without manual intervention. If a block is corrupted, I would at least expect the namenode to
force the creation of an additional good replica to restore the replication level.
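For context, the manual intervention currently required looks roughly like this (a sketch using the stock `hdfs fsck` tool; `/path/to/file` is a placeholder, and exact behavior depends on the HDFS version):

```shell
# List all files that currently have corrupt blocks.
hdfs fsck / -list-corruptFileBlocks

# Inspect one affected file: which blocks it has and where the replicas live.
hdfs fsck /path/to/file -files -blocks -locations

# Last resort: delete the corrupt files so the cluster returns to a
# healthy state. The data in the lost blocks is gone at this point.
hdfs fsck / -delete
```

None of these steps are triggered automatically by the namenode today, which is the gap this issue is about.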

This message was sent by Atlassian JIRA

To unsubscribe, e-mail: hdfs-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-help@hadoop.apache.org
