hadoop-mapreduce-user mailing list archives

From ch huang <justlo...@gmail.com>
Subject Re: how to handle the corrupt block in HDFS?
Date Tue, 10 Dec 2013 01:15:27 GMT
The strange thing is that when I use the following command, I find 1 corrupt block:

#  curl -s http://ch11:50070/jmx |grep orrupt
    "CorruptBlocks" : 1,
But when I run hdfs fsck /, I get none; everything looks fine:

# sudo -u hdfs hdfs fsck /
........

....................................Status: HEALTHY
 Total size:    1479728140875 B (Total open files size: 1677721600 B)
 Total dirs:    21298
 Total files:   100636 (Files currently being written: 25)
 Total blocks (validated):      119788 (avg. block size 12352891 B) (Total
open file blocks (not validated): 37)
 Minimally replicated blocks:   119788 (100.0 %)
 Over-replicated blocks:        0 (0.0 %)
 Under-replicated blocks:       166 (0.13857816 %)
 Mis-replicated blocks:         0 (0.0 %)
 Default replication factor:    3
 Average block replication:     3.0027633
 Corrupt blocks:                0
 Missing replicas:              831 (0.23049656 %)
 Number of data-nodes:          5
 Number of racks:               1
FSCK ended at Tue Dec 10 09:14:48 CST 2013 in 3276 milliseconds

The filesystem under path '/' is HEALTHY
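
If I read the counters right, the JMX CorruptBlocks value counts blocks that have at least one corrupt replica, while fsck only flags a block as corrupt once every replica is bad, so a single bad replica on an otherwise healthy block would explain the mismatch (the 37 open file blocks that fsck skips by default are another candidate). Assuming the usual fsck options are available in this build, these may help surface the affected file:

#  sudo -u hdfs hdfs fsck / -list-corruptfileblocks
#  sudo -u hdfs hdfs fsck / -openforwrite -files -blocks -locations | grep -i corrupt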


On Tue, Dec 10, 2013 at 8:32 AM, ch huang <justlooks@gmail.com> wrote:

> hi, maillist:
>             My Nagios has been alerting me all day that there is a corrupt
> block in HDFS, but I do not know how to remove it. Will HDFS handle this
> automatically? And will removing the corrupt block cause any data
> loss? Thanks.
>
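
On the question in the quoted message: as far as I know, HDFS re-replicates a block automatically as long as at least one healthy replica survives; only a block with every replica corrupt needs manual cleanup, and deleting it does lose that file's data. A rough sketch of the usual steps, with /path/to/file as a placeholder for whatever -list-corruptfileblocks reports:

#  sudo -u hdfs hdfs fsck / -list-corruptfileblocks          # find the affected file(s)
#  sudo -u hdfs hdfs fsck /path/to/file -move                # placeholder path; salvages readable blocks into /lost+found
#  sudo -u hdfs hdfs fsck /path/to/file -delete              # placeholder path; removes the corrupt file entirely (data loss)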
