hadoop-hdfs-dev mailing list archives

From dlmar...@comcast.net
Subject Re: HDFS 2.6.0 upgrade ends with missing blocks
Date Wed, 07 Jan 2015 17:33:16 GMT
I went back and read through the comments on HDFS-6482 and found that the restrictions (question #2 below) are already documented at https://wiki.apache.org/hadoop/FAQ#On_an_individual_data_node.2C_how_do_you_balance_the_blocks_on_the_disk.3F


I will update HDFS-1312 with this information. 

----- Original Message -----

From: dlmarion@comcast.net 
To: hdfs-dev@hadoop.apache.org 
Sent: Wednesday, January 7, 2015 8:25:41 AM 
Subject: HDFS 2.6.0 upgrade ends with missing blocks 


I recently upgraded from CDH 5.1.2 to CDH 5.3.0. I know, contact Cloudera, but this is actually
a generic issue. After the upgrade I brought up the DNs, and after all of them had checked
in I ended up with missing blocks. I tracked this down in the DN logs to an error at startup
where the DN fails to create subdirectories. This happens in BlockPoolSliceStorage.doUpgrade().
It appears that the directory structure changed with HDFS-6482 and the DN now pre-creates
all of the directories at startup time. If a disk is nearly full, the upgrade fails to create
the subdirectories because doing so consumes the remaining space. If the HDFS configuration
tolerates failed drives (dfs.datanode.failed.volumes.tolerated > 0), the DN will start without
the now-full disk and report all of the blocks except the ones on that disk.
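For reference, the block-ID-based layout from HDFS-6482 maps each block to a fixed two-level subdirectory computed from its ID, which is why all of the directories can be created up front. This is a sketch of my reading of the 2.6.0 code (DatanodeUtil.idToBlockDir); the exact shifts and masks are my assumption and may differ between releases:

```java
// Sketch of the HDFS-6482 block-ID-based directory mapping as I read
// the 2.6.0 source; the 0xFF masks (256 x 256 = 65,536 leaf directories
// per volume) are an assumption, not verified against every release.
public class BlockDirSketch {
    static String idToBlockDir(long blockId) {
        int d1 = (int) ((blockId >> 16) & 0xFF); // first-level subdir
        int d2 = (int) ((blockId >> 8) & 0xFF);  // second-level subdir
        return "subdir" + d1 + "/subdir" + d2;
    }

    public static void main(String[] args) {
        // The path is a pure function of the block ID.
        System.out.println(idToBlockDir(0x40000CE3L)); // subdir0/subdir12
    }
}
```

Since the path is a pure function of the block ID, a block moved to another volume presumably has to land in the same subdir path under that volume's finalized directory.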

I didn't find any warning about this in the Apache release notes; one might be useful for people
in a similar situation. For the Cloudera folks on this list, I could not find a warning or note
in your upgrade instructions either.

Some questions: 

1. How much free space is needed per disk to pre-create the directory structure? Is it dependent
on the type of filesystem? I calculated 256MB from my reading of the ticket, but I may have
misunderstood something.

2. Now that block locations are calculated from the block id, are there restrictions on where
blocks can be placed? I assume the location is not verified on read, for backwards compatibility;
if that is not true, then someone needs to comment on HDFS-1312 that the older utilities cannot
be used. I need to move blocks from the full disks to other locations, and I'm looking for any
restrictions on doing that.
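On question #1, here is the back-of-the-envelope arithmetic that reproduces my 256MB figure. It assumes the 2.6.0 layout pre-creates 256 x 256 two-level subdirectories per block pool per volume, and that each empty directory costs one filesystem block (4 KiB on default ext4); both assumptions are mine, not confirmed by the ticket, and a filesystem with a different allocation unit would change the answer:

```java
// Rough space estimate for pre-creating the HDFS-6482 directory tree.
// Assumptions (mine): 256 x 256 leaf directories per block pool per
// volume, and one 4 KiB filesystem block per empty directory (ext4
// defaults). Other filesystems or block sizes will give other totals.
public class UpgradeSpaceSketch {
    public static void main(String[] args) {
        long leafDirs = 256L * 256L;      // 65,536 leaf directories
        long fsBlockBytes = 4096L;        // ext4 default block size
        long bytesNeeded = leafDirs * fsBlockBytes;
        System.out.println(bytesNeeded / (1024 * 1024) + " MiB"); // 256 MiB
    }
}
```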

