hadoop-mapreduce-user mailing list archives

From Michael Segel <michael_se...@hotmail.com>
Subject Re: Replacing a hard drive on a slave
Date Wed, 28 Nov 2012 15:41:02 GMT
Silly question, but why are you worrying about this?

In a production environment, the odds of getting a replacement disk into service within 10 minutes of
a fault being detected are very low.

Why do you care that the blocks are replicated to another node?
After you replace the disk, bounce the node (restart the DN, and the RS if it's running); you can always
force a rebalance of the cluster afterwards.
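The rebalance mentioned above can be kicked off with the HDFS balancer tool. A minimal sketch, assuming the `hadoop` command is on the PATH and run by a user with HDFS admin rights; the threshold value is illustrative, not a recommendation:

```shell
# Start the HDFS balancer; it moves blocks between datanodes until every
# node's disk usage is within the given percentage of the cluster average.
# -threshold 5 means "within 5 percentage points" (an assumed value).
hadoop balancer -threshold 5
```

The balancer runs until the cluster is balanced to that threshold and can be interrupted safely; it only moves block replicas and never deletes data.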

On Nov 28, 2012, at 9:22 AM, Mark Kerzner <mark.kerzner@shmsoft.com> wrote:

> What happens if I stop the datanode, miss the 10 minute 30 second deadline, and restart
> the datanode, say, 30 minutes later? Will Hadoop re-use the data on this datanode, balancing
> it with HDFS? And what happens to those blocks that correspond to files that have been updated?
> Mark
> On Wed, Nov 28, 2012 at 6:51 AM, Stephen Fritz <stephenf@cloudera.com> wrote:
> HDFS will not start re-replicating blocks from a dead DN for 10 minutes 30 seconds by default.
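That 10 minute 30 second figure is the NameNode's dead-node timeout: 2 × the heartbeat recheck interval (default 300 s) plus 10 × the heartbeat interval (default 3 s). The exact property names vary across Hadoop versions, but the arithmetic checks out:

```shell
# Dead-node timeout = 2 * recheck interval + 10 * heartbeat interval.
# Defaults assumed here: recheck interval 300 s, heartbeat interval 3 s.
recheck=300
heartbeat=3
timeout=$((2 * recheck + 10 * heartbeat))
echo "${timeout} seconds"   # 630 seconds = 10 minutes 30 seconds
```

Raising the recheck interval lengthens the window before re-replication starts, at the cost of the NameNode reacting more slowly to genuinely dead nodes.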
> Right now there isn't a good way to replace a disk out from under a running datanode,
so the best way is:
> - Stop the DN
> - Replace the disk
> - Restart the DN
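The three steps above can be sketched as commands run on the slave. The daemon script name and the data directory path are assumptions (they differ by distribution and by your `dfs.data.dir` setting):

```shell
# 1. Stop the datanode so nothing is writing to the failed disk.
hadoop-daemon.sh stop datanode

# 2. Physically replace the disk, then recreate the data directory it
#    backed. /data/1/dfs/dn is a hypothetical dfs.data.dir entry; the
#    owner/group must match whatever user runs the datanode.
mkdir -p /data/1/dfs/dn
chown -R hdfs:hadoop /data/1/dfs/dn

# 3. Restart the datanode; it rescans its data directories and reports
#    its surviving blocks back to the NameNode.
hadoop-daemon.sh start datanode
```

If the stop/start window stays under the dead-node timeout, the NameNode never marks the node dead and no re-replication storm is triggered.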
> On Wed, Nov 28, 2012 at 9:14 AM, Mark Kerzner <mark.kerzner@shmsoft.com> wrote:
> Hi,
> can I remove one hard drive from a slave but tell Hadoop not to re-replicate the missing blocks
> for a few minutes, because I will put the drive back? Or will this not work at all, and will Hadoop
> start replicating anyway, since the blocks are missing, even if only for a short time?
> Thank you. Sincerely,
> Mark
