From: Apache Wiki
To: Apache Wiki
Date: Sun, 27 Nov 2011 19:32:15 -0000
Message-ID: <20111127193215.17287.37671@eos.apache.org>
Subject: [Hadoop Wiki] Update of "Chukwa_Quick_Start" by EricYang

Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change notification.

The "Chukwa_Quick_Start" page has been changed by EricYang:
http://wiki.apache.org/hadoop/Chukwa_Quick_Start?action=diff&rev1=44&rev2=45

  == Pre-requisites ==
  Chukwa should work on any POSIX platform, but GNU/Linux is the only production platform that has been tested extensively. Chukwa has also been used successfully on Mac OS X, which several members of the Chukwa team use for development.

- The only absolute software requirements are Java 1.6 or better and Hadoop 0.20.203+. HICC, the Chukwa visualization interface, requires HBase 0.90.3.
+ The only absolute software requirements are Java 1.6 or better and Hadoop 0.20.205+. HICC, the Chukwa visualization interface, requires HBase 0.90.4.

  The Chukwa cluster management scripts rely on ssh; these scripts, however, are not required if you have some alternate mechanism for starting and stopping daemons.

@@ -24, +24 @@
  {{http://people.apache.org/~eyang/docs/chukwa-0.5-arch.png}}

  == Compiling and installing Chukwa ==
+ 1. To compile Chukwa, just type 'mvn clean package -DskipTests -DTODO_HBASE_CONF_DIR=/path/to/$HBASE_CONF_DIR' in the project root directory.
- 1. Copy hbase-0.90.3.jar, hbase-0.90.3-test.jar and HBASE_HOME/lib/zookeeper-*.jar to CHUKWA_HOME/lib
- 1. To compile Chukwa, just say ''ant tar'' in the project root directory.
- 1. Extract the compiled tar file from build/chukwa-0.x.y.tar.gz to the Chukwa root directory.
+ 1. Extract the compiled tar file from target/chukwa-0.x.y.tar.gz to the Chukwa root directory.
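The two build steps above can be run from a shell roughly as follows. This is a minimal sketch: the HBase configuration path, the 0.x.y version placeholder in the tarball name, and the /opt extraction directory are illustrative values, not part of the wiki page.

{{{
# From the Chukwa project root: build the distribution tarball
# (replace /path/to/$HBASE_CONF_DIR with your actual HBase conf directory)
mvn clean package -DskipTests -DTODO_HBASE_CONF_DIR=/path/to/$HBASE_CONF_DIR

# Unpack the resulting tarball; the version number and target directory will differ
tar xzf target/chukwa-0.x.y.tar.gz -C /opt
}}}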
  == Setup Chukwa Cluster ==
  General Hadoop configuration is available at: [[http://hadoop.apache.org/common/docs/current/cluster_setup.html|Hadoop Configuration]]
@@ -43, +42 @@
  }}}
  Save the file.
  1. Copy CHUKWA_HOME/hadoop-metrics.properties to HADOOP_CONF_DIR.
- 1. Copy CHUKWA_HOME/chukwa-hadoop-0.5.0-client.jar to HADOOP_HOME/lib.
+ 1. Copy CHUKWA_HOME/share/chukwa/chukwa-0.5.0-client.jar to HADOOP_HOME/share/hadoop/lib.
- 1. Copy CHUKWA_HOME/lib/json-simple-1.1.jar to HADOOP_HOME/lib.
+ 1. Copy CHUKWA_HOME/share/chukwa/lib/json-simple-1.1.jar to HADOOP_HOME/share/hadoop/lib.
  1. Restart the Hadoop cluster.
  1. General HBase configuration is available at: [[http://hbase.apache.org/docs/current/api/overview-summary.html#overview_description|HBase Configuration]]
  1. After Hadoop and HBase have been configured properly, run:

@@ -54, +53 @@
  This procedure initializes the default Chukwa HBase schema.

  == Configuring and starting the Collector ==
- 1. Copy conf/chukwa-collector-conf.xml.template to conf/chukwa-collector-conf.xml
  1. Edit conf/chukwa-collector-conf.xml: comment out the default properties for chukwaCollector.writerClass and chukwaCollector.pipeline, uncomment the block of HBaseWriter parameters, and save.
- 1. If you're running HBase in distributed mode, copy your hadoop-site.xml, hbase-site.xml file to the collectors conf/ directory. At a minimum, this file must contain a setting for hbase.zookeeper.quorum.
- 1. Copy conf/chukwa-env.sh-template to conf/chukwa-env.sh.
  1. Edit chukwa-env.sh. You almost certainly need to set at least JAVA_HOME, HADOOP_HOME, HADOOP_CONF_DIR, HBASE_HOME, and HBASE_CONF_DIR.
- 1. In the chukwa root directory, say bash bin/chukwa collector'' ''
+ 1. In the Chukwa root directory, run 'bin/chukwa collector'

  == Configuring and starting the local agent ==
- 1. ''Copy conf/chukwa-agent-conf.xml.template to conf/chukwa-agent-conf.xml ''
- 1. ''Copy conf/collectors.template to conf/collectors ''
+ 1. Verify the etc/chukwa/chukwa-agent-conf.xml configuration.
+ 1. Verify that etc/chukwa/collectors contains the list of collector hostnames.
- 1. In the chukwa root directory, say bash bin/chukwa agent''
+ 1. In the Chukwa root directory, run 'bin/chukwa agent'

  == Starting Adaptors ==
  The local agent speaks a simple text-based protocol, by default over port 9093. Suppose you want Chukwa to monitor system metrics, Hadoop metrics, and Hadoop logs on the localhost:
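As a rough sketch of that protocol (these exact commands are assumptions, not quoted from the wiki page), you can drive a running agent over its control port with telnet: 'list' reports the adaptors the agent is currently running, and 'add' starts a new one. The file-tailing adaptor class name, the SysLog data type, the log path, and the trailing 0 (the initial byte offset) are illustrative values and may differ in your Chukwa version.

{{{
$ telnet localhost 9093
list
add filetailer.FileTailingAdaptor SysLog /var/log/messages 0
list
}}}

telnet here is only for illustration; any tool that can write lines to a TCP socket, such as nc, can issue the same commands.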