hadoop-common-dev mailing list archives

From: Amar Kamat <ama...@yahoo-inc.com>
Subject: Re: DFS capacity
Date: Mon, 17 Mar 2008 04:47:57 GMT
On Sat, 15 Mar 2008, Zhu Huijun wrote:

> Hi,
>
> I have a question about DFS capacity. Our cluster has 16 nodes, each with
> a 250 GB hard drive. I use one node as the namenode and the other 15 as
> datanodes. However, the web page at http://localhost:50070 shows that each
> node has only 7.69 GB of capacity, with 2.75 GB remaining. I use the default
I guess DFS uses 'df -k' for space computation, so the capacity reported for
each datanode is that of the partition holding /tmp, not the whole 250 GB disk.
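You can check what a datanode would see (hypothetical output; the exact
numbers depend on how your disks are partitioned):

    $ df -k /tmp
    Filesystem     1K-blocks    Used Available Use% Mounted on
    /dev/sda2        8064272 5180000   2884272  65% /

If /tmp lives on a small root partition like this, DFS can only ever report
about 7.7 GB per node, no matter how large the disk is.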
> "/tmp/hadoop-${user.name}" as the base directory (the first property in
> hadoop-default.xml). I tried in hadoop-site.xml to change this directory to
> my account directory "/home/${user.name}/hadoop", but only one node can be
> initialed as datanode,
The changed hadoop-site.xml must be present on all the nodes, and the new
hadoop directory must exist on each node. I guess the only node that got
initialized is the one with the new hadoop-site.xml.
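Something like this in hadoop-site.xml on every node should do it (a minimal
sketch; the path is the one you mentioned, and the directory must already
exist and be writable by the user running Hadoop):

    <property>
      <name>hadoop.tmp.dir</name>
      <value>/home/${user.name}/hadoop</value>
      <description>A base for other temporary directories.</description>
    </property>

Copy the file to all the nodes, create the directory everywhere, and restart
DFS so every datanode picks up the new storage location.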
Amar
> although I can see more than 200 GB of capacity on
> the node. Can anyone give some suggestions on how to make more space
> available for DFS? Do I need an account with root privileges?
>
> Thanks!
>
> Huijun Zhu
>
