hadoop-common-commits mailing list archives

From Apache Wiki <wikidi...@apache.org>
Subject [Hadoop Wiki] Trivial Update of "Chukwa Quick Start" by andyk
Date Tue, 11 Nov 2008 09:13:31 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change notification.

The following page has been changed by andyk:

   1. In the chukwa root directory, say ``bash bin/processSinkFiles.sh'' 
   1. (Document TODO: this script has a hard-coded port of 54310.  Can you confirm that HDFS
must be running on port 54310?)
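One way to check the port your HDFS actually uses is to read it off the `fs.default.name` URI in your Hadoop configuration. A minimal sketch, assuming the URI has already been pulled into a variable (the value shown is only an example):

```shell
# Hypothetical value of fs.default.name, as it might appear in
# the Hadoop configuration (hadoop-site.xml in this era of Hadoop).
FS_DEFAULT="hdfs://localhost:54310"

# Strip everything up to and including the last ':' to get the port.
HDFS_PORT="${FS_DEFAULT##*:}"

echo "$HDFS_PORT"   # prints 54310
```

If the printed port differs from 54310, the hard-coded value in the script would have to be adjusted to match.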
- == Running Chukwa -- Networked ==
+ == Running Chukwa on a Cluster ==
+ The cluster deployment process is still under active development, so the following instructions
may not work yet; they will soon, so please don't delete them.  Eventually, even the single-machine
setup above (for newcomers to Chukwa who want to try it out of the box on their own machine)
will be replaced by the process below, with the conf/slaves.template and conf/collectors.template
files renamed (to remove the .template suffix) to provide localhost defaults for the agent
and collector.
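The renaming step described above can be sketched as follows. The scratch directory and file contents here are illustrative stand-ins for the real Chukwa conf/ directory; in practice you would run the cp step there instead:

```shell
# Stand-in for the Chukwa conf/ directory (illustrative path only).
DEMO=/tmp/chukwa-conf-demo
mkdir -p "$DEMO"

# The .template files default to localhost, per the text above.
printf 'localhost\n' > "$DEMO/slaves.template"
printf 'localhost\n' > "$DEMO/collectors.template"

# Strip the .template suffix, keeping the template as a reference copy.
for f in slaves collectors; do
  cp "$DEMO/$f.template" "$DEMO/$f"
done
```

Copying rather than moving keeps the pristine template around for comparison after local edits.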
- Running Chukwa in a networked context is essentially similar to the single-machine deployment
discussed above.  However, in a network context, you would also need to tell the local agent
where the collector[s] live, by listing them in conf/collectors.
+ 1. As in Hadoop, you specify the set of nodes on which to run Chukwa agents using a conf/slaves
file (analogous to conf/slaves in Hadoop).
+ 2. Similarly, collectors are specified in the conf/collectors file and can be started with
bin/start-collectors.sh.
+ 3. The local agent on each machine also reads the conf/collectors file, selecting a collector
at random from this list to talk to.  Thus, as with Hadoop, it is common to run Chukwa from
a shared file system so that all of the agents (i.e. slaves) see the same conf files.
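The random-selection behavior in step 3 can be sketched as below. The hostnames and the use of GNU `shuf` are illustrative assumptions, not part of Chukwa itself:

```shell
# Stand-in for conf/collectors: one collector hostname per line
# (hostnames are made up for this sketch).
cat > /tmp/collectors.demo <<'EOF'
collector1.example.com
collector2.example.com
collector3.example.com
EOF

# Pick one line at random, as an agent would when choosing a collector.
# shuf is a GNU coreutils tool; this is an assumption about the host.
COLLECTOR="$(shuf -n 1 /tmp/collectors.demo)"
echo "agent will send chunks to: $COLLECTOR"
```

Because every agent draws from the same list, a shared conf/collectors file spreads agent traffic across all collectors without any central coordination.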
