hadoop-common-user mailing list archives

From Brian Bockelman <bbock...@cse.unl.edu>
Subject Re: start anyways with missing blocks
Date Fri, 21 Jan 2011 20:16:18 GMT
Hi Mike,

You need to take the cluster out of safemode before you can make these changes.

hadoop dfsadmin -safemode leave

Then you can do the "hadoop fsck / -delete"
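Putting the two steps together, a minimal recovery sequence might look like the following (assuming data loss is acceptable, as in your case; the paths and flags are the standard Hadoop CLI ones, and this must run as the HDFS superuser against a live cluster):

```shell
# Force the NameNode out of safemode; it will not leave on its own
# while blocks are missing, so this has to be done manually.
hadoop dfsadmin -safemode leave

# Delete the files whose blocks are gone. fsck only removes the
# corrupt file entries; healthy files are untouched.
hadoop fsck / -delete

# Re-run fsck to confirm the filesystem now reports HEALTHY.
hadoop fsck /
```

Note that -delete is irreversible: the 319 corrupt files from your fsck output will be gone for good, so make sure nothing you need lives only in those files before running it.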

Brian

On Jan 21, 2011, at 2:12 PM, mike anderson wrote:

> Also, here's the output of dfsadmin -report.  What seems weird is that it's
> not reporting any missing blocks. BTW, I tried doing fsck / -delete, but it
> failed, complaining about the missing nodes.
> 
> $ ../bin/hadoop dfsadmin -report
> Safe mode is ON
> Configured Capacity: 3915872829440 (3.56 TB)
> Present Capacity: 2913577631744 (2.65 TB)
> DFS Remaining: 1886228164608 (1.72 TB)
> DFS Used: 1027349467136 (956.79 GB)
> DFS Used%: 35.26%
> Under replicated blocks: 0
> Blocks with corrupt replicas: 0
> Missing blocks: 0
> 
> -------------------------------------------------
> Datanodes available: 9 (9 total, 0 dead)
> 
> Name: 10.0.16.91:50010
> Decommission Status : Normal
> Configured Capacity: 139438620672 (129.86 GB)
> DFS Used: 44507017216 (41.45 GB)
> Non DFS Used: 85782597632 (79.89 GB)
> DFS Remaining: 9149005824(8.52 GB)
> DFS Used%: 31.92%
> DFS Remaining%: 6.56%
> Last contact: Fri Jan 21 15:10:47 EST 2011
> 
> 
> Name: 10.0.16.165:50010
> Decommission Status : Normal
> Configured Capacity: 472054276096 (439.63 GB)
> DFS Used: 139728683008 (130.13 GB)
> Non DFS Used: 90374217728 (84.17 GB)
> DFS Remaining: 241951375360(225.33 GB)
> DFS Used%: 29.6%
> DFS Remaining%: 51.25%
> Last contact: Fri Jan 21 15:10:47 EST 2011
> 
> 
> Name: 10.0.16.163:50010
> Decommission Status : Normal
> Configured Capacity: 472054276096 (439.63 GB)
> DFS Used: 174687391744 (162.69 GB)
> Non DFS Used: 55780028416 (51.95 GB)
> DFS Remaining: 241586855936(225 GB)
> DFS Used%: 37.01%
> DFS Remaining%: 51.18%
> Last contact: Fri Jan 21 15:10:47 EST 2011
> 
> 
> Name: 10.0.16.164:50010
> Decommission Status : Normal
> Configured Capacity: 472054276096 (439.63 GB)
> DFS Used: 95075942400 (88.55 GB)
> Non DFS Used: 182544318464 (170.01 GB)
> DFS Remaining: 194434015232(181.08 GB)
> DFS Used%: 20.14%
> DFS Remaining%: 41.19%
> Last contact: Fri Jan 21 15:10:47 EST 2011
> 
> 
> Name: 10.0.16.169:50010
> Decommission Status : Normal
> Configured Capacity: 472054276096 (439.63 GB)
> DFS Used: 24576 (24 KB)
> Non DFS Used: 51301322752 (47.78 GB)
> DFS Remaining: 420752928768(391.86 GB)
> DFS Used%: 0%
> DFS Remaining%: 89.13%
> Last contact: Fri Jan 21 15:10:48 EST 2011
> 
> 
> Name: 10.0.16.160:50010
> Decommission Status : Normal
> Configured Capacity: 472054276096 (439.63 GB)
> DFS Used: 171275218944 (159.51 GB)
> Non DFS Used: 119652265984 (111.43 GB)
> DFS Remaining: 181126791168(168.69 GB)
> DFS Used%: 36.28%
> DFS Remaining%: 38.37%
> Last contact: Fri Jan 21 15:10:47 EST 2011
> 
> 
> Name: 10.0.16.161:50010
> Decommission Status : Normal
> Configured Capacity: 472054276096 (439.63 GB)
> DFS Used: 131355377664 (122.33 GB)
> Non DFS Used: 174232702976 (162.27 GB)
> DFS Remaining: 166466195456(155.03 GB)
> DFS Used%: 27.83%
> DFS Remaining%: 35.26%
> Last contact: Fri Jan 21 15:10:47 EST 2011
> 
> 
> Name: 10.0.16.162:50010
> Decommission Status : Normal
> Configured Capacity: 472054276096 (439.63 GB)
> DFS Used: 139831177216 (130.23 GB)
> Non DFS Used: 91403055104 (85.13 GB)
> DFS Remaining: 240820043776(224.28 GB)
> DFS Used%: 29.62%
> DFS Remaining%: 51.02%
> Last contact: Fri Jan 21 15:10:47 EST 2011
> 
> 
> Name: 10.0.16.167:50010
> Decommission Status : Normal
> Configured Capacity: 472054276096 (439.63 GB)
> DFS Used: 130888634368 (121.9 GB)
> Non DFS Used: 151224688640 (140.84 GB)
> DFS Remaining: 189940953088(176.9 GB)
> DFS Used%: 27.73%
> DFS Remaining%: 40.24%
> Last contact: Fri Jan 21 15:10:46 EST 2011
> 
> 
> On Fri, Jan 21, 2011 at 3:03 PM, mike anderson <saidtherobot@gmail.com>wrote:
> 
>> After a tragic cluster crash it looks like some blocks are missing.
>> 
>> Total size: 343918527293 B (Total open files size: 67108864 B)
>> Total dirs: 5897
>> Total files: 5574 (Files currently being written: 19)
>> Total blocks (validated): 9441 (avg. block size 36428188 B) (Total open
>> file blocks (not validated): 1)
>>  ********************************
>>  CORRUPT FILES: 319
>>  MISSING BLOCKS: 691
>>  MISSING SIZE: 32767071153 B
>>  CORRUPT BLOCKS: 691
>>  ********************************
>> Minimally replicated blocks: 8750 (92.68086 %)
>> Over-replicated blocks: 0 (0.0 %)
>> Under-replicated blocks: 0 (0.0 %)
>> Mis-replicated blocks: 0 (0.0 %)
>> Default replication factor: 2
>> Average block replication: 2.731914
>> Corrupt blocks: 691
>> Missing replicas: 0 (0.0 %)
>> Number of data-nodes: 9
>> Number of racks: 1
>> 
>> 
>> The filesystem under path '/' is CORRUPT
>> 
>> 
>> 
>> I don't particularly care if I lose some of the data (it's just a cache
>> store). Instead of figuring out where the blocks went missing, can I just
>> forget about them and boot up with the blocks I have?
>> 
>> -Mike
>> 

