hadoop-user mailing list archives

From Jean-Marc Spaggiari <jean-m...@spaggiari.org>
Subject Re: hadoop filesystem corrupt
Date Fri, 01 Mar 2013 19:32:13 GMT
Hi Mohit,

Is your replication factor really set to 1?

"Default replication factor:    1"

Also, can you look into your data directories and ensure you always
have the right structure and all the related META files?
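If it helps, something along these lines should confirm the configured
replication factor and show the on-disk block layout (the data directory
path below is only an example; check dfs.data.dir in your hdfs-site.xml
for the real one):

```shell
# Print the configured default replication factor
hdfs getconf -confKey dfs.replication

# Inspect a datanode data directory (example path; yours is whatever
# dfs.data.dir / dfs.datanode.data.dir points to in hdfs-site.xml).
# Each block file blk_<id> should have a matching blk_<id>_<genstamp>.meta
ls /data/dfs/dn/current/
```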

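To find out exactly which files hold the 4 corrupt blocks, something
like this should work (standard fsck options, untested on your cluster).
Keep in mind that with replication 1 a corrupt block has no other
replica to recover from, so -move/-delete only clean up the namespace:

```shell
# List the files that own the corrupt blocks
sudo -u hdfs hadoop fsck / -list-corruptfileblocks

# Show block IDs and datanode locations for a suspect file
sudo -u hdfs hadoop fsck /path/to/file -files -blocks -locations

# Then either move the corrupt files to /lost+found ...
sudo -u hdfs hadoop fsck / -move
# ... or delete them outright
sudo -u hdfs hadoop fsck / -delete
```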
JM

2013/3/1 Mohit Vadhera <project.linux.proj@gmail.com>:
> Hi,
>
> While moving the data my data folder didn't move successfully. Now while
> running the FSCK i get the file system is corrupt. Can anybody help me at
> the earliest please. I shall be thankful
>
> $ sudo -u hdfs hadoop fsck /
>
> ------------------------------------------------------
>
> ...Status: CORRUPT
>  Total size:    739698399117 B
>  Total dirs:    2179
>  Total files:   9064 (Files currently being written: 1)
>  Total blocks (validated):      17060 (avg. block size 43358640 B)
>   ********************************
>   CORRUPT FILES:        4
>   CORRUPT BLOCKS:       4
>   ********************************
>  Minimally replicated blocks:   17060 (100.0 %)
>  Over-replicated blocks:        0 (0.0 %)
>  Under-replicated blocks:       232 (1.3599062 %)
>  Mis-replicated blocks:         0 (0.0 %)
>  Default replication factor:    1
>  Average block replication:     1.0
>  Corrupt blocks:                4
>  Missing replicas:              2088 (10.904533 %)
>  Number of data-nodes:          1
>  Number of racks:               1
> FSCK ended at Fri Mar 01 13:56:41 EST 2013 in 509 milliseconds
>
>
> The filesystem under path '/' is CORRUPT
>
