hadoop-common-user mailing list archives

From Todd Lipcon <t...@cloudera.com>
Subject Re: dfs.name.dir capacity for namenode backup?
Date Tue, 18 May 2010 06:14:48 GMT
On Mon, May 17, 2010 at 5:10 PM, jiang licht <licht_jiang@yahoo.com> wrote:

> I am considering using a machine to keep a
> redundant copy of the HDFS metadata by setting dfs.name.dir in
> hdfs-site.xml like this (as in YDN):
> <property>
>    <name>dfs.name.dir</name>
>    <value>/home/hadoop/dfs/name,/mnt/namenode-backup</value>
>    <final>true</final>
> </property>
> where the two folders are on different machines so that
> /mnt/namenode-backup keeps a copy of hdfs file system information and its
> machine can be used to replace the first machine that fails as namenode.
> So, my question is: how much space will this HDFS metadata consume? I guess
> it is proportional to the HDFS capacity. What is that ratio, or what size
> should I expect for a 150TB HDFS?

On the order of a few GB, max (you really need double the size of your
image, so it has tmp space when downloading a checkpoint or performing an
upgrade). But on any disk you can buy these days you'll have plenty of
space.


Todd Lipcon
Software Engineer, Cloudera
