hadoop-general mailing list archives

From Eli Collins <...@cloudera.com>
Subject Re: dfs.data.dir
Date Wed, 21 Apr 2010 21:33:56 GMT
Hey Mag,

You can bring down the datanode daemon, add the extra directory to
dfs.data.dir, and then restart. Since blocks are round-robined across
the configured directories, the new directory will have lower
utilization at first (once the other directories are full it will
start catching up).
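As a rough sketch, assuming a 0.20-style tarball install and made-up
mount points (adjust the bin/ and conf/ paths to your layout):

    # stop the datanode daemon on the node being changed
    $ bin/hadoop-daemon.sh stop datanode

    # in conf/hdfs-site.xml, append the new volume to dfs.data.dir:
    <property>
      <name>dfs.data.dir</name>
      <value>/disk1/dfs/data,/disk2/dfs/data,/disk3/dfs/data</value>
    </property>

    # restart; the datanode creates and uses the new directory
    $ bin/hadoop-daemon.sh start datanode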
If that's not OK, you can re-balance the directories by hand with cp
while the datanode is down (before you restart it). If the datanode
stays down longer than 10 minutes the namenode will start
re-replicating its blocks, but when you bring the datanode back up the
namenode will notice the over-replicated blocks and remove them.
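A minimal sketch of the by-hand move, assuming the 0.20-era on-disk
layout where each dfs.data.dir volume stores blocks as blk_<id> plus a
paired blk_<id>_<genstamp>.meta file under current/ (the block id and
paths below are placeholders):

    # with the datanode stopped, copy a block file together with its
    # .meta companion to the emptier volume, then remove the originals
    $ cp /disk1/dfs/data/current/blk_12345* /disk3/dfs/data/current/
    $ rm /disk1/dfs/data/current/blk_12345*

Copying first and removing only after the copy succeeds means a failed
move can't lose a replica; on restart the datanode re-scans its
directories and reports the blocks from their new locations.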


On Wed, Apr 21, 2010 at 4:09 AM, Mag Gam <magawake@gmail.com> wrote:
> I would like to add/remove data directories in my hdfs installation.
> Currently, what I do is decommission the entire node, remove
> all content from dfs.data.dir, and re-enable the node. But is there an
> easier way? Each of my nodes holds 2TB of data and I don't want
> to waste the time...
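For comparison, that decommission round-trip is roughly the following,
assuming dfs.hosts.exclude already points at an exclude file in
hdfs-site.xml (the file path and hostname are placeholders):

    # on the namenode: exclude the node and start decommissioning
    $ echo "dn1.example.com" >> /etc/hadoop/conf/dfs.exclude
    $ bin/hadoop dfsadmin -refreshNodes

    # once 'hadoop dfsadmin -report' shows the node as Decommissioned,
    # wipe its dfs.data.dir, drop it from the exclude file, and rejoin
    $ bin/hadoop dfsadmin -refreshNodes

Because that path first re-replicates and later re-copies the node's
full 2TB, the restart-and-let-it-fill approach above avoids most of
that traffic.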
