hadoop-common-dev mailing list archives

From "Zhu Huijun" <nautilu...@gmail.com>
Subject DFS capacity
Date Sat, 15 Mar 2008 17:39:22 GMT
Hi,

I have a question about DFS capacity. Our cluster has 16 nodes, each with a
250 GB hard drive. I use one node as the namenode and the other 15 as
datanodes. However, the web page at http://localhost:50070 shows that each
node has only 7.69 GB of capacity, with 2.75 GB remaining. I am using the
default "/tmp/hadoop-${user.name}" as the base directory (the first property
in hadoop-default.xml). I tried changing this directory in hadoop-site.xml to
my home directory, "/home/${user.name}/hadoop", but then only one node could
be initialized as a datanode, even though that node showed more than 200 GB
of capacity. Can anyone suggest how to make more space available to DFS? Do I
need an account with root privileges?
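
For reference, the override I tried in hadoop-site.xml looked roughly like
this (a sketch, assuming the first property in hadoop-default.xml is
hadoop.tmp.dir, whose default value is "/tmp/hadoop-${user.name}"):

    <?xml version="1.0"?>
    <!-- hadoop-site.xml: properties set here override hadoop-default.xml -->
    <configuration>
      <property>
        <!-- assumed property name; default is /tmp/hadoop-${user.name} -->
        <name>hadoop.tmp.dir</name>
        <value>/home/${user.name}/hadoop</value>
      </property>
    </configuration>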

Thanks!

Huijun Zhu
