hadoop-common-commits mailing list archives

From Apache Wiki <wikidi...@apache.org>
Subject [Lucene-hadoop Wiki] Trivial Update of "Hadoop Upgrade" by KonstantinShvachko
Date Mon, 11 Sep 2006 18:20:19 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Lucene-hadoop Wiki" for change notification.

The following page has been changed by KonstantinShvachko:
http://wiki.apache.org/lucene-hadoop/Hadoop_Upgrade

------------------------------------------------------------------------------
   7. Optionally, repeat steps 3, 4, and 5, and compare the results with the previous run to
verify that the state of the file system remained unchanged.
   8. Copy the following checkpoint files into a backup directory: [[BR]] {{{dfs.name.dir/edits}}}
[[BR]] {{{dfs.name.dir/image/fsimage}}}
   9. Stop the DFS cluster. [[BR]] {{{bin/stop-dfs.sh}}} [[BR]] Verify that DFS has really stopped,
and that there are no Data``Node processes running on any node.
-  10. Install new version of Hadoop software. See HowToInstall and HowToConfigure for details.
+  10. Install new version of Hadoop software. See GettingStartedWithHadoop and HowToConfigure
for details.
   11. Optionally, update the {{{conf/slaves}}} file before starting, to reflect the current
set of active nodes.
   12. Optionally, change the name node’s and the job tracker’s port numbers in the configuration,
so that any nodes still running the old version cannot connect and disrupt system operation.
[[BR]] {{{fs.default.name}}} [[BR]] {{{mapred.job.tracker}}}
   13. Optionally, start the name node only. [[BR]] {{{bin/hadoop-daemon.sh start namenode}}}
[[BR]] This should convert the checkpoint to the new version format.
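The backup in step 8 and the daemon commands in steps 9 and 13 can be sketched as a small
shell script. This is an illustration only: the {{{NAME_DIR}}} and {{{BACKUP_DIR}}} paths are
stand-ins (a mock checkpoint layout is created so the copy can be demonstrated), and the real
cluster commands are shown as comments rather than executed.

```shell
#!/bin/sh
# Sketch of the step 8 backup. NAME_DIR stands in for the configured
# dfs.name.dir; a mock layout is created here purely for illustration.
NAME_DIR=$(mktemp -d)
BACKUP_DIR=$(mktemp -d)

# Mock checkpoint files such as a name node would have written.
mkdir -p "$NAME_DIR/image"
echo edits-data   > "$NAME_DIR/edits"
echo fsimage-data > "$NAME_DIR/image/fsimage"

# Step 8: copy the checkpoint files into the backup directory,
# keeping the image/ subdirectory so the layout stays recognizable.
mkdir -p "$BACKUP_DIR/image"
cp "$NAME_DIR/edits"         "$BACKUP_DIR/edits"
cp "$NAME_DIR/image/fsimage" "$BACKUP_DIR/image/fsimage"

# Steps 9 and 13 on a real cluster (not run here):
#   bin/stop-dfs.sh                      # step 9: stop DFS, then check no DataNode remains
#   bin/hadoop-daemon.sh start namenode  # step 13: converts the checkpoint format
echo "backup complete"
```

For step 12, the port changes would go into the {{{fs.default.name}}} and
{{{mapred.job.tracker}}} properties of the cluster configuration.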
