hadoop-common-user mailing list archives

From Ted Dunning <tdunn...@veoh.com>
Subject Re: HDFS corrupt...how to proceed?
Date Mon, 12 May 2008 16:56:26 GMT


You don't need to correct over-replicated files.

The under-replicated files should cure themselves, but there is a known problem in
older versions where that re-replication doesn't happen quite right.
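
If the under-replicated count doesn't drop on its own, one workaround people
sometimes use (just a general trick, and the path below is a placeholder for
one of your own directories) is to bump the replication factor up and then
back down, which forces the namenode to revisit those blocks:

  # Temporarily raise the replication factor, then restore the target of 3,
  # so the namenode re-queues the affected blocks (placeholder path)
  hadoop dfs -setrep -R 4 /path/to/data
  hadoop dfs -setrep -R 3 /path/to/data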

You can use hadoop fsck / to get a list of the files that are broken, and there
are options to move what remains of them to lost+found or to delete them.
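
For reference, those fsck modes look like this (run from a machine with the
Hadoop client configured for your cluster):

  # Report only: list corrupt files and missing blocks without changing anything
  hadoop fsck /

  # Move the salvageable remains of corrupt files into /lost+found
  hadoop fsck / -move

  # Or remove the corrupt files outright (destructive, so double-check first)
  hadoop fsck / -delete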

Other than that, things should correct themselves fairly quickly.


On 5/11/08 8:23 PM, "C G" <parallelguy@yahoo.com> wrote:

> Hi All:
>    
>   We had a primary node failure over the weekend.  When we brought the node
> back up and ran hadoop fsck, I saw that the file system is corrupt.  I'm unsure
> how best to proceed.  Any advice is greatly appreciated.  If I've missed a
> Wiki page or documentation somewhere, please feel free to tell me to RTFM and
> let me know where to look.
>    
>   Specific question: how do I clear under- and over-replicated files?  Is the
> correct procedure to copy the file locally, delete it from HDFS, and then copy
> it back to HDFS?
>    
>   The fsck output is long, but the final summary is:
>    
>  Total size:    4899680097382 B
>  Total blocks:  994252 (avg. block size 4928006 B)
>  Total dirs:    47404
>  Total files:   952070
>   ********************************
>   CORRUPT FILES:        2
>   MISSING BLOCKS:       24
>   MISSING SIZE:         1501009630 B
>   ********************************
>  Over-replicated blocks:        1 (1.0057812E-4 %)
>  Under-replicated blocks:       14958 (1.5044476 %)
>  Target replication factor:     3
>  Real replication factor:       2.9849212
>   
> The filesystem under path '/' is CORRUPT
> 

