From: Apache Wiki
To: core-commits@hadoop.apache.org
Date: Thu, 13 Nov 2008 07:21:54 -0000
Subject: [Hadoop Wiki] Trivial Update of "Chukwa Quick Start" by andyk

Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change notification.

The following page has been changed by andyk:
http://wiki.apache.org/hadoop/Chukwa_Quick_Start

------------------------------------------------------------------------------
  == Running Chukwa on a Cluster ==
  The cluster deployment process is still under active development, so the following instructions may not work yet; they should soon, so please don't delete them. Eventually, even the single-machine setup above (for newcomers to Chukwa who want to try it out of the box on their own machine) will be replaced by the process below, with the conf/slaves.template and conf/collectors.template files renamed (dropping the .template suffix) so that both the collector and the agent default to localhost.
- 1. Like in Hadoop, you can specify a set of nodes on which you want to run Chukwa agents (similar to conf/slaves in Hadoop) using a conf/slaves file.
+ 1. Specify which hosts to run collectors on in the conf/collectors file.
- 2. Similarly, collectors should be specified using the conf/collectors file. These can be run using bin/start-collectors.sh
+ 1. Start the collectors in your cluster with the command bin/start-collectors.sh
- 3. The local agents on each machine will also reference the conf/collectors file, selecting a collector at random from this list to talk to. Thus, like Hadoop, it is common to run Chukwa from a shared file system where all of the agents (i.e. slaves) can access the same conf files.
+ 1. As in Hadoop, specify the set of nodes on which to run Chukwa agents (similar to conf/slaves in Hadoop) using a conf/slaves file. The local agents on each machine will also reference the conf/collectors file, selecting a collector at random from this list to talk to. Thus, as in Hadoop, it is common to run Chukwa from a shared file system where all of the agents (i.e. slaves) can access the same conf files.
+
+ 1. Start the agents by running bin/start-agents.sh
+
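
For illustration, here is a minimal sketch of what the two files and the start sequence might look like on a small cluster. It assumes, like Hadoop's conf/slaves, one hostname per line; the hostnames below are hypothetical placeholders, and the exact layout may change while the deployment process is still in development.

{{{
# conf/collectors -- one collector host per line (hypothetical hostnames)
collector01.example.com
collector02.example.com

# conf/slaves -- one agent host per line (hypothetical hostnames)
node01.example.com
node02.example.com

# From the Chukwa install directory: start collectors first, then agents,
# so each agent can pick a running collector from conf/collectors.
bin/start-collectors.sh
bin/start-agents.sh
}}}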