hadoop-common-user mailing list archives

From Leo Alekseyev <dnqu...@gmail.com>
Subject HDFS got messed up after changing dfs.name.dir in configs -- how to fix?
Date Wed, 18 Aug 2010 22:04:43 GMT
We are running Hadoop from Cloudera (CDH3b2), and we recently
streamlined some of our configuration management.
One of the changes relocated dfs.name.dir to a new location.

Upon cluster restart, we see the following:

1) The namenode remains locked in safe mode, reporting:
The reported blocks 0 needs additional 656829 blocks to reach the
threshold 0.9990 of total blocks 657487. Safe mode will be turned off
automatically.
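For context, we have been checking the namenode state with the stock dfsadmin CLI (subcommands below are the standard ones shipped with this Hadoop version):

```shell
# Check whether the namenode is still in safe mode
hadoop dfsadmin -safemode get

# Show the namenode's view of datanodes and block counts
hadoop dfsadmin -report

# Note: `hadoop dfsadmin -safemode leave` would force safe mode off,
# but it would not bring back the missing blocks, so we have not run it.
```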

2) All data on the datanodes has been relocated to $dfs.data.dir/toBeDeleted.

We have complete archives of the old fsimage and edits files.

What is the best way to put the data back in the correct place on the
datanodes and have them report the correct number of blocks to the
namenode?
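For what it's worth, the naive fix we are tempted to try on each datanode is to move the block files back by hand; a sketch, with a hypothetical dfs.data.dir of /data/dfs/data (we have not done this and would appreciate confirmation that it is safe):

```shell
# On each datanode, with the datanode daemon stopped:
# move the relocated block files back into the data directory.
mv /data/dfs/data/toBeDeleted/* /data/dfs/data/current/

# Then restart the datanode so it re-reports its blocks.
```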

In addition, what was our mistake here? Should we have copied the
${dfs.name.dir} contents to the new location before specifying it
in the config?
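In hindsight, here is a sketch of the migration procedure we suspect we should have followed (paths and script names are placeholders for our setup; please correct us if this is wrong):

```shell
# 1. Stop the cluster so the fsimage and edits files are quiescent
bin/stop-all.sh

# 2. Copy the existing namenode metadata to the new location,
#    preserving ownership and permissions
cp -a /old/dfs/name /new/dfs/name

# 3. Only then point dfs.name.dir at /new/dfs/name in hdfs-site.xml
#    and restart the cluster
bin/start-all.sh
```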
