hadoop-general mailing list archives

From Marco Cadetg <ma...@zattoo.com>
Subject best way to replace disks on a small cluster
Date Wed, 07 Sep 2011 13:19:34 GMT
Hi there,

Current situation:
3 slaves, each with two 320GB disks in RAID 1. All of the disks show high read
error rates, and I/O throughput has dropped below 5Mb/s even without any Hadoop
job running. (It looks like the cluster will fall apart soon...)

What is the best way to replace the bad disks? I may be able to add another
two machines into the mix. I can't/won't rebuild the RAID arrays, as my new
disks will be 2TB each, so I wouldn't like to use only 320GB of them.
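
My rough plan was to mount each new 2TB disk individually (JBOD, which I
understand Hadoop prefers over RAID anyway) and list both mount points in
dfs.data.dir, along these lines (the /data/1 and /data/2 mount points are
just placeholders):

    <!-- hdfs-site.xml: one entry per disk; the datanode spreads
         blocks across the listed directories -->
    <property>
      <name>dfs.data.dir</name>
      <value>/data/1/dfs/data,/data/2/dfs/data</value>
    </property>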

Is the best way to add the two new nodes into the mix, list the old machines
in the exclude file referenced by dfs.hosts.exclude, and then take them out
once decommissioning has finished?
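
I.e. something like this, assuming the exclude file lives at
/etc/hadoop/conf/excludes (the path is just a placeholder):

    <!-- hdfs-site.xml on the namenode -->
    <property>
      <name>dfs.hosts.exclude</name>
      <value>/etc/hadoop/conf/excludes</value>
    </property>

    # add the hostnames of the nodes to retire, then make the
    # namenode re-read the exclude file
    $ echo slave2 >> /etc/hadoop/conf/excludes
    $ echo slave3 >> /etc/hadoop/conf/excludes
    $ hadoop dfsadmin -refreshNodes
    # wait until 'hadoop dfsadmin -report' shows the nodes as
    # "Decommissioned" before taking them offline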

Thanks for your help,
-Marco
