hadoop-common-user mailing list archives

From C G <parallel...@yahoo.com>
Subject HDFS corrupt...how to proceed?
Date Mon, 12 May 2008 03:23:39 GMT
Hi All:
  We had a primary node failure over the weekend.  When we brought the node back up and I
ran Hadoop fsck, I saw that the file system is corrupt.  I'm unsure how best to proceed, and
any advice is greatly appreciated.  If I've missed a Wiki page or documentation somewhere,
please feel free to tell me to RTFM and let me know where to look.
  Specific question: how do I clear under- and over-replicated files?  Is the correct procedure
to copy each file locally, delete it from HDFS, and then copy it back to HDFS?
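  In other words, the round trip I have in mind looks something like this (the paths here
are made up for illustration; the commands are the standard HDFS shell operations):

```shell
# Copy the affected file out of HDFS to local disk (example path only)
hadoop fs -copyToLocal /data/part-00000 /tmp/part-00000

# Remove the under-replicated copy from HDFS
hadoop fs -rm /data/part-00000

# Copy it back in, so it is written fresh at the target replication factor
hadoop fs -copyFromLocal /tmp/part-00000 /data/part-00000
```

I also wonder whether simply re-setting the replication factor with
`hadoop fs -setrep -w 3 /data` would prod the namenode into re-replicating without the
round trip, but I haven't tried it.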
  The fsck output is long, but the final summary is:
 Total size:    4899680097382 B
 Total blocks:  994252 (avg. block size 4928006 B)
 Total dirs:    47404
 Total files:   952070
  CORRUPT FILES:        2
  MISSING BLOCKS:       24
  MISSING SIZE:         1501009630 B
 Over-replicated blocks:        1 (1.0057812E-4 %)
 Under-replicated blocks:       14958 (1.5044476 %)
 Target replication factor:     3
 Real replication factor:       2.9849212
The filesystem under path '/' is CORRUPT
