hadoop-hdfs-dev mailing list archives

From "Vinay (JIRA)" <j...@apache.org>
Subject [jira] [Created] (HDFS-5728) [Diskfull] Block recovery will fail if the metafile not having crc for all chunks of the block
Date Wed, 08 Jan 2014 04:36:52 GMT
Vinay created HDFS-5728:
---------------------------

             Summary: [Diskfull] Block recovery will fail if the metafile not having crc for
all chunks of the block
                 Key: HDFS-5728
                 URL: https://issues.apache.org/jira/browse/HDFS-5728
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: datanode
    Affects Versions: 2.2.0
            Reporter: Vinay
            Assignee: Vinay


1. A client (a RegionServer) has opened a stream to write its WAL to HDFS. This is not a one-time
upload; data is written slowly over time.
2. One of the DataNodes ran out of disk space (other data had filled up its disks).
3. Unfortunately, the block was being written to only this DataNode in the cluster, so the client's
write also failed.

4. After some time the disk was freed up and all processes were restarted.
5. Now the HMaster tries to recover the file by calling recoverLease.
At this point recovery fails, reporting a file length mismatch.

When checked:
 actual block file length: 62484480
 calculated block length: 62455808

This was because the meta file contained CRCs for only 62455808 bytes of the block, so recovery
treated 62455808 as the block length.
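The arithmetic behind the mismatch can be sketched as follows. This is a hedged illustration, not the actual DataNode code: it assumes the usual HDFS defaults of a 7-byte meta-file header, CRC32 checksums (4 bytes per chunk), and 512 bytes of data per checksum chunk; the constant names and the helper function are hypothetical.

```python
# Sketch: how the length "validated" by the meta file is derived.
# Assumed defaults (not stated in this report):
META_HEADER_LEN = 7        # 2-byte version + 5-byte DataChecksum header
CHECKSUM_LEN = 4           # CRC32 is 4 bytes per chunk
BYTES_PER_CHECKSUM = 512   # dfs.bytes-per-checksum default

def checksum_covered_length(meta_file_len):
    """Bytes of block data covered by the CRCs present in the meta file."""
    num_chunks = (meta_file_len - META_HEADER_LEN) // CHECKSUM_LEN
    return num_chunks * BYTES_PER_CHECKSUM

# Numbers from this report: the block file on disk is longer than the
# range covered by checksums, so recovery computes a shorter length
# and fails with a length mismatch.
block_file_len = 62484480
meta_file_len = META_HEADER_LEN + (62455808 // BYTES_PER_CHECKSUM) * CHECKSUM_LEN

covered = checksum_covered_length(meta_file_len)
print(covered)                    # 62455808, not 62484480
print(block_file_len - covered)   # 28672 trailing bytes with no CRC
```

Under these assumptions, the disk-full condition left 28672 bytes (56 chunks) at the end of the block file without corresponding CRCs in the meta file, which is why the calculated length falls short of the on-disk length.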

Recovery kept failing continuously, no matter how many times it was retried.




--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
