hadoop-hdfs-user mailing list archives

From Anu Engineer <aengin...@hortonworks.com>
Subject Re: change HDFS disks on each node
Date Fri, 04 Mar 2016 21:25:23 GMT
Hi Joe,

As long as you copy all the data from the old disk without altering the paths (i.e. the structure
and layout of the data directories), and you use the same version of the datanode software, then it should work.

Here is the Apache documentation suggesting that this will work: http://wiki.apache.org/hadoop/FAQ#On_an_individual_data_node.2C_how_do_you_balance_the_blocks_on_the_disk.3F

In your case, all you need to do is:

1. Add the new hard disk.
2. Mount the new hard disk and copy the data directories to the new disk.
3. Remove the old disk, mount the new drive in its place, and make sure that your data directories are
pointing to the new location.
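Step 2 above can be sketched as follows. The paths and the tiny datanode layout are illustrative stand-ins (temp directories here so the sketch runs anywhere); substitute your actual mount points and dfs.datanode.data.dir paths, and run the copy with the datanode stopped. `rsync -a` works equally well as `cp -a` and can resume an interrupted copy.

```shell
# Stand-ins for the old and new disks' data directories
# (hypothetical; use your real mount points).
OLD=$(mktemp -d)
NEW=$(mktemp -d)

# Fake a minimal datanode-like layout -- preserving the structure
# exactly is what matters for the migration.
mkdir -p "$OLD/current/BP-demo/current/finalized"
echo "block data" > "$OLD/current/BP-demo/current/finalized/blk_1001"

# Archive-mode copy: preserves permissions, ownership, timestamps,
# and directory layout.
cp -a "$OLD"/. "$NEW"/

# Verify the copy is byte-identical before touching the old disk.
diff -r "$OLD" "$NEW" && echo "copy verified"
```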

That should do the trick. 

As usual, any advice from a user group carries the risk of data loss, so please be gentle
with your old disk(s) until you are absolutely sure that the new disks are perfectly functional.
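One way to act on that caution before wiping the old disks is to compare per-file checksums across the old and new volumes. A minimal sketch, again using temp dirs in place of real mount points (paths and layout are illustrative):

```shell
# Stand-ins for the real mount points (hypothetical paths).
OLD=$(mktemp -d)
NEW=$(mktemp -d)
mkdir -p "$OLD/finalized" "$NEW/finalized"
echo "block data" > "$OLD/finalized/blk_1001"
echo "block data" > "$NEW/finalized/blk_1001"

# Checksum every file relative to its root, then compare the lists.
old_sums=$(cd "$OLD" && find . -type f -exec md5sum {} + | sort)
new_sums=$(cd "$NEW" && find . -type f -exec md5sum {} + | sort)
[ "$old_sums" = "$new_sums" ] && echo "old and new volumes match"
```

Once the datanode is back up on the new disks, running `hdfs fsck /` from a client node will also report any missing or corrupt blocks cluster-wide.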


On 3/4/16, 1:04 PM, "Joseph Naegele" <jnaegele@grierforensics.com> wrote:

>Hi all,
>Each of our N datanodes has attached two 3TB disks. I want to attach new
>*replacement* storage to each node, move the HDFS contents to the new
>storage, and remove the old volumes. We're using Hadoop 2.7.1.
>1. What is the simplest, correct way to do this? Does hot-swapping move data
>from old disks to new disks? I am able to stop the cluster completely.
>2. Is it reasonable to use LVM to create expandable logical volumes? We're
>using AWS and contemplating switching from SSDs to magnetic storage, which
>is limited to 1TB volumes.
>To unsubscribe, e-mail: user-unsubscribe@hadoop.apache.org
>For additional commands, e-mail: user-help@hadoop.apache.org
