hadoop-hdfs-user mailing list archives

From elsif <elsif.t...@gmail.com>
Subject Handling datanode disk failures
Date Fri, 31 Jul 2009 16:47:32 GMT
What is the recommended procedure for dealing with single disk failures
on a datanode with multiple disks?

Let's say we have a node with four disks listed as /mnt/disk1,
/mnt/disk2, /mnt/disk3, /mnt/disk4 in the dfs.data.dir property. After
running for a few months, one of the disks (/mnt/disk3) fails. Hadoop
keeps running using replicas from other nodes, but what steps should be
taken to replace the disk? Should the new disk have the same mount
point or be assigned a new one (/mnt/disk3replace)?
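
For reference, the relevant entry in our hdfs-site.xml looks roughly
like this (a sketch; the paths match the mount points described above):

  <!-- DataNode storage directories: comma-separated list, one per disk -->
  <property>
    <name>dfs.data.dir</name>
    <value>/mnt/disk1,/mnt/disk2,/mnt/disk3,/mnt/disk4</value>
  </property>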

Thanks for your help!
