hadoop-common-commits mailing list archives

From Apache Wiki <wikidi...@apache.org>
Subject [Lucene-hadoop Wiki] Update of "Hadoop Upgrade" by TorstenCurdt
Date Fri, 24 Aug 2007 19:27:34 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Lucene-hadoop Wiki" for change notification.

The following page has been changed by TorstenCurdt:
http://wiki.apache.org/lucene-hadoop/Hadoop_Upgrade

------------------------------------------------------------------------------
   1. Stop map-reduce cluster(s) [[BR]] {{{bin/stop-mapred.sh}}} [[BR]] and all client applications
running on the DFS cluster.
   2. Run the {{{fsck}}} command: [[BR]] {{{bin/hadoop fsck / -files -blocks -locations >
dfs-v-old-fsck-1.log}}} [[BR]] Fix DFS until there are no errors. The resulting file
will contain the complete block map of the file system. [[BR]] Note: Redirecting the {{{fsck}}}
output is recommended for large clusters in order to avoid time-consuming output to stdout.
   3. Run the {{{lsr}}} command: [[BR]] {{{bin/hadoop dfs -lsr / > dfs-v-old-lsr-1.log}}} [[BR]]
The resulting file will contain the complete namespace of the file system.
-  4. Run {{{report}}} command to create a list of data nodes participating in the cluster.
[[BR]] {{{bin/hadoop dfs -report > dfs-v-old-report-1.log}}}
+  4. Run {{{report}}} command to create a list of data nodes participating in the cluster.
[[BR]] {{{bin/hadoop dfsadmin -report > dfs-v-old-report-1.log}}}
   5. Optionally, copy all of the data stored in DFS, or only the data that could not otherwise
be recovered, to a local file system or a backup instance of DFS (an example command is sketched after this list).
   6. Optionally, stop and restart the DFS cluster in order to create an up-to-date namespace
checkpoint of the old version. [[BR]] {{{bin/stop-dfs.sh}}} [[BR]] {{{bin/start-dfs.sh}}}
   7. Optionally, repeat steps 3, 4, and 5, and compare the results with the previous run to verify
that the state of the file system has remained unchanged (a comparison sketch follows this list).
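
A minimal sketch of the optional backup in step 5, assuming the data to preserve lives under {{{/user}}} and that {{{/backup/dfs-old}}} is a local directory with enough free space (both paths are illustrative, not part of the documented procedure):

{{{
# Copy a DFS subtree to the local file system; repeat for every path
# that could not be recreated if the upgrade fails.
bin/hadoop dfs -copyToLocal /user /backup/dfs-old/user
}}}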
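
A minimal sketch of the comparison in step 7, assuming the repeated run writes its output to a second set of log files (the {{{-2}}} names are illustrative). Apart from timestamps printed by the tools, the two runs should match:

{{{
# Repeat the lsr and report commands from steps 3 and 4.
bin/hadoop dfs -lsr /        > dfs-v-old-lsr-2.log
bin/hadoop dfsadmin -report  > dfs-v-old-report-2.log

# Compare against the first run; differences other than timestamps
# indicate that the file system state changed between the runs.
diff dfs-v-old-lsr-1.log    dfs-v-old-lsr-2.log
diff dfs-v-old-report-1.log dfs-v-old-report-2.log
}}}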
