hadoop-common-user mailing list archives

From Bill Graham <billgra...@gmail.com>
Subject Re: Chukwa questions
Date Fri, 09 Jul 2010 16:18:12 GMT
Your understanding of how Chukwa works is correct.
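As you describe, agents forward log chunks over the network to a collector, and the collector writes them out to HDFS. On the collector side, the HDFS destination is set in the collector's configuration file. A hedged sketch, assuming a Chukwa 0.4-era setup; the NameNode host and port below are placeholders:

```xml
<!-- chukwa-collector-conf.xml (on the collector node): tells the collector
     which HDFS filesystem to write collected chunks into.
     The host/port are illustrative placeholders, not real values. -->
<property>
  <name>writer.hdfs.filesystem</name>
  <value>hdfs://namenode.example.com:9000/</value>
</property>
```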

"Hadoop by itself" is a system that contains both HDFS and the MapReduce
framework. The other projects you list are all built on top of Hadoop, but
you don't need any of them to run Hadoop or to get value out of it.

To run the Chukwa agent on a data-source node, you do not need Hadoop
installed on that node. The Chukwa agent ships the Hadoop jars it needs in
its runtime distribution, and the agent uses those; none of the Hadoop
daemons have to run on that node.
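To make the agent setup concrete, a minimal sketch of starting an agent and registering a file-tailing adaptor at runtime. Script names, the control port, the adaptor class, the datatype name, and the log path are all assumptions for illustration and vary by Chukwa version; check the docs for your release.

```shell
# Start the Chukwa agent on the data-source node. No Hadoop daemons are
# needed here; the agent's own distribution carries the Hadoop jars it uses.
# $CHUKWA_HOME and the "bin/chukwa agent" launcher are illustrative.
cd "$CHUKWA_HOME"
bin/chukwa agent &

# The agent exposes a local control port (9093 by default in 0.4-era
# releases) where adaptors can be added while the agent is running.
# Adaptor class, datatype ("SearchLog"), log path, and initial offset (0)
# below are placeholders for illustration.
echo "add filetailer.FileTailingAdaptor SearchLog /var/log/search/search.log 0" \
  | nc localhost 9093
```

Each data-source host runs one such agent; the adaptors it hosts stream their chunks to a collector over the network.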

CC'ing the chukwa-users@hadoop.apache.org list, where this discussion should
probably move if there are follow-up Chukwa questions.


On Fri, Jul 9, 2010 at 8:33 AM, Blargy <zmanods@hotmail.com> wrote:

> I am looking into Chukwa to collect/aggregate our search logs from
> multiple hosts. As I understand it, I need to have an agent/adaptor running
> on each host, which in turn forwards the data to a collector (across the
> network), which then writes it out to HDFS. Correct?
> Does Hadoop need to be installed on the host machines that are running the
> agents/adaptors, or just Chukwa? Is Hadoop by itself anything, or is Hadoop
> just a collection of tools... HDFS, Hive, Chukwa, Mahout, etc.?
> Thanks
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/Chukwa-questions-tp954643p954643.html
> Sent from the Hadoop lucene-users mailing list archive at Nabble.com.
