hadoop-general mailing list archives

From Steve Loughran <ste...@apache.org>
Subject Re: Adding hard-disks to an existing HDFS cluster
Date Mon, 01 Mar 2010 10:48:32 GMT
Eli Collins wrote:
>> I presume it makes no sense to try to spread the NameNode across multiple
>> disks?
> 
> Not quite sure what you mean here, but dfs.name.dir (where the NN
> stores its metadata) should have multiple directories on different
> disks to guard against the failure of any single disk. Many people
> also use RAIDed disks and include an NFS mount in dfs.name.dir to have
> additional, reliable copies of this data.
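The layout Eli describes is set in hdfs-site.xml; a minimal sketch, assuming two local disks plus an NFS mount (the paths are illustrative, not from the original thread):

```xml
<!-- hdfs-site.xml: the NN writes its metadata to every directory
     listed, so a single disk failure does not lose the namespace. -->
<property>
  <name>dfs.name.dir</name>
  <!-- two local disks plus an NFS mount (example paths) -->
  <value>/disk1/hdfs/name,/disk2/hdfs/name,/mnt/nfs/hdfs/name</value>
</property>
```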

Best of all: run a secondary namenode to checkpoint the streamed edit 
log, as that will mean your cluster restarts faster. You do not want to 
lose your NN data.
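A sketch of the checkpoint settings a secondary namenode uses, with the 0.20-era property names; the interval and path here are examples, not recommendations from the thread:

```xml
<!-- Checkpoint settings for the secondary namenode: where merged
     fsimage checkpoints are written, and how often the edit log is
     folded into a new fsimage. -->
<property>
  <name>fs.checkpoint.dir</name>
  <value>/disk1/hdfs/namesecondary</value>
</property>
<property>
  <name>fs.checkpoint.period</name>
  <!-- seconds between checkpoints (default 3600) -->
  <value>3600</value>
</property>
```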


We've discussed making it easier to hot-swap a disk drive on a live 
datanode. That could be simplified by having the DN move all data off 
the affected disk onto its other disks (if there is room), and then copy 
it back later. Nobody has done any work on this yet:
http://issues.apache.org/jira/browse/HDFS-664
