incubator-chukwa-user mailing list archives

From Eric Yang <eric...@gmail.com>
Subject Re: How to begin with Chukwa
Date Fri, 11 Nov 2011 16:45:06 GMT
 It looks like a bug in pid file management. Run $JAVA_HOME/bin/jps, then kill the running
CollectorStub.
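
Something along these lines usually clears it up (a sketch only; the pid
directory and file name depend on what CHUKWA_PID_DIR is set to in
conf/chukwa-env.sh on your install):

  $JAVA_HOME/bin/jps                        # look for a CollectorStub entry
  kill <pid-of-CollectorStub>               # use the pid that jps reports
  rm -f $CHUKWA_PID_DIR/*collector*.pid     # stale pid file; exact name varies, check the directory first
  bin/start-collectors.sh                   # then start the collector again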

regards,
Eric

On Nov 11, 2011, at 5:15 AM, Mohammad Tariq wrote:

> A strange thing I have noticed is that when I issued " bin/chukwa
> collector " I got this message - " $ 2011-11-11 18:40:36.367::INFO:
> Logging to STDERR via org.mortbay.log.StdErrLog
> 2011-11-11 18:40:36.391::INFO:  jetty-6.1.11 "
> 
> But when I tried " bin/start-collectors.sh " I got this -
> localhost: starting collector, logging to
> /tmp/chukwa/log/chukwa-chukwa-collector-ubuntu.out
> localhost: collector running as process 4570. Stop it first.
> localhost: 2011-11-11 16:51:41.485::INFO:  Logging to STDERR via
> org.mortbay.log.StdErrLog
> localhost: 2011-11-11 16:51:41.508::INFO:  jetty-6.1.11
> 
> This means that the collector is already running, but when I tried " kill
> 4570 " it showed this -
> bash: kill: (4570) - No such process. I am completely
> clueless. Please help me out.
> 
> Regards,
>     Mohammad Tariq
> 
> 
> 
> On Fri, Nov 11, 2011 at 6:33 PM, Mohammad Tariq <dontariq@gmail.com> wrote:
>> I removed everything and started again from scratch:
>> 
>> 1 - First of all I modified HADOOP_HOME and HADOOP_CONF_DIR -
>> export HADOOP_HOME="/home/tariq/hadoop-0.20.203.0"
>> export HADOOP_CONF_DIR="/home/tariq/hadoop-0.20.203.0"
>> 
>> 2 - My collectors and agents files have a single line, i.e. -
>> localhost
>> 
>> 3 - This is the content of my chukwa-collector-conf.xml file -
>>  <property>
>>    <name>writer.hdfs.filesystem</name>
>>    <value>hdfs://localhost:9000/</value>
>>    <description>HDFS to dump to</description>
>>  </property>
>> 
>>  <property>
>>    <name>chukwaCollector.outputDir</name>
>>    <value>/chukwa/logs/</value>
>>    <description>Chukwa data sink directory</description>
>>  </property>
>> 
>>  <property>
>>    <name>chukwaCollector.rotateInterval</name>
>>    <value>300000</value>
>>    <description>Chukwa rotate interval (ms)</description>
>>  </property>
>> 
>>  <property>
>>    <name>chukwaCollector.http.port</name>
>>    <value>8080</value>
>>    <description>The HTTP port number the collector will listen on</description>
>>  </property>
>> 
>> 4 - This is the initial_adaptors file -
>> add org.apache.hadoop.chukwa.datacollection.adaptor.ExecAdaptor Iostat 60 /usr/bin/iostat -x -k 55 2 0
>> add org.apache.hadoop.chukwa.datacollection.adaptor.ExecAdaptor Df 60 /bin/df -l 0
>> add org.apache.hadoop.chukwa.datacollection.adaptor.ExecAdaptor Sar 60 /usr/bin/sar -q -r -n ALL 55 0
>> add org.apache.hadoop.chukwa.datacollection.adaptor.ExecAdaptor Top 60 /usr/bin/top -b -n 1 -c 0
>> 
>> 5 - This is my chukwa-agent-conf.xml file -
>> 
>>  <property>
>>    <name>chukwaAgent.tags</name>
>>    <value>cluster="chukwa"</value>
>>    <description>The cluster's name for this agent</description>
>>  </property>
>> 
>>  <property>
>>    <name>chukwaAgent.control.port</name>
>>    <value>9093</value>
>>    <description>The socket port number the agent's control interface
>> can be contacted at.</description>
>>  </property>
>> 
>>  <property>
>>    <name>chukwaAgent.hostname</name>
>>    <value>localhost</value>
>>    <description>The hostname of the agent on this node. Usually
>> localhost, this is used by the chukwa instrumentation agent-control
>> interface library</description>
>>  </property>
>> 
>>  <property>
>>    <name>chukwaAgent.checkpoint.name</name>
>>    <value>chukwa_agent_checkpoint</value>
>>    <description>the prefix to prepend to the agent's checkpoint
>> file(s)</description>
>>  </property>
>> 
>>  <property>
>>    <name>chukwaAgent.checkpoint.dir</name>
>>    <value>${CHUKWA_LOG_DIR}/</value>
>>    <description>the location to put the agent's checkpoint
>> file(s)</description>
>>  </property>
>> 
>>  <property>
>>    <name>chukwaAgent.checkpoint.interval</name>
>>    <value>5000</value>
>>    <description>the frequency interval for the agent to do
>> checkpoints, in milliseconds</description>
>>  </property>
>> 
>>  <property>
>>    <name>chukwaAgent.sender.fastRetries</name>
>>    <value>4</value>
>>    <description>the number of post attempts to make to a single
>> collector, before marking it failed</description>
>>  </property>
>> 
>>  <property>
>>    <name>chukwaAgent.collector.retries</name>
>>    <value>144000</value>
>>    <description>the number of attempts to find a working
>> collector</description>
>>  </property>
>> 
>>  <property>
>>    <name>chukwaAgent.collector.retryInterval</name>
>>    <value>20000</value>
>>    <description>the number of milliseconds to wait between searches
>> for a collector</description>
>>  </property>
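>>
>> (Two quick sanity checks against the values above - just a sketch, assuming
>> the Hadoop binaries are on the PATH and the agent is already running:
>>
>>   hadoop fs -ls hdfs://localhost:9000/   # writer.hdfs.filesystem should point at a reachable NameNode
>>   telnet localhost 9093                  # chukwaAgent.control.port; typing 'list' should print the active adaptors
>> )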
>> 
>> Regards,
>>     Mohammad Tariq
>> 
>> 
>> 
>> On Fri, Nov 11, 2011 at 12:56 PM, Mohammad Tariq <dontariq@gmail.com> wrote:
>>> Hi Bill,
>>>  I have edited my collectors file; it contains the following two lines now -
>>> 
>>> localhost
>>> http://localhost:9999/
>>> 
>>> Regards,
>>>     Mohammad Tariq
>>> 
>>> 
>>> 
>>> On Fri, Nov 11, 2011 at 3:38 AM, Bill Graham <billgraham@gmail.com> wrote:
>>>> Unfortunately, the conf/collectors config file is used in different ways
>>>> (with different expected formats) by the agent and the collector. See this
>>>> discussion:
>>>> http://lucene.472066.n3.nabble.com/Chukwa-setup-issues-tt2764763.html#a2765143
>>>> If you try to run both processes from the same configs, you'll run into
>>>> issues.
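>>>>
>>>> For example (an illustrative sketch only - check that thread and your
>>>> version's docs for the exact formats): the agent side wants collector URLs
>>>> in conf/collectors, something like
>>>>
>>>>   http://localhost:8080/
>>>>
>>>> (matching chukwaCollector.http.port), while bin/start-collectors.sh reads
>>>> the same file as a plain list of hosts to launch collectors on, e.g.
>>>>
>>>>   localhost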
>>>> 
>>>> On Thu, Nov 10, 2011 at 1:51 PM, TARIQ <dontariq@gmail.com> wrote:
>>>>> 
>>>>> Here is the collector.log file -
>>>>> 
>>>>> 2011-11-11 03:20:37,394 INFO main ChukwaConfiguration - chukwaConf is /home/prashant/chukwa-0.4.0/bin/../conf
>>>>> 2011-11-11 03:20:38,960 INFO main root - initing servletCollector
>>>>> 2011-11-11 03:20:38,967 INFO main PipelineStageWriter - using pipelined writers, pipe length is 2
>>>>> 2011-11-11 03:20:38,972 INFO Thread-6 SocketTeeWriter - listen thread started
>>>>> 2011-11-11 03:20:38,979 INFO main SeqFileWriter - rotateInterval is 300000
>>>>> 2011-11-11 03:20:38,979 INFO main SeqFileWriter - outputDir is /chukwa/logs/
>>>>> 2011-11-11 03:20:38,979 INFO main SeqFileWriter - fsname is hdfs://localhost:9000/
>>>>> 2011-11-11 03:20:38,979 INFO main SeqFileWriter - filesystem type from core-default.xml is org.apache.hadoop.hdfs.DistributedFileSystem
>>>>> 2011-11-11 03:20:39,205 ERROR main SeqFileWriter - can't connect to HDFS, trying default file system instead (likely to be local)
>>>>> java.lang.NoClassDefFoundError: org/apache/commons/configuration/Configuration
>>>>>         at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.<init>(DefaultMetricsSystem.java:37)
>>>>>         at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.<clinit>(DefaultMetricsSystem.java:34)
>>>>>         at org.apache.hadoop.security.UgiInstrumentation.create(UgiInstrumentation.java:51)
>>>>>         at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:196)
>>>>>         at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:159)
>>>>>         at org.apache.hadoop.security.UserGroupInformation.isSecurityEnabled(UserGroupInformation.java:216)
>>>>>         at org.apache.hadoop.security.KerberosName.<clinit>(KerberosName.java:83)
>>>>>         at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:189)
>>>>>         at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:159)
>>>>>         at org.apache.hadoop.security.UserGroupInformation.isSecurityEnabled(UserGroupInformation.java:216)
>>>>>         at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:409)
>>>>>         at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:395)
>>>>>         at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:1418)
>>>>>         at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1319)
>>>>>         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:226)
>>>>>         at org.apache.hadoop.chukwa.datacollection.writer.SeqFileWriter.init(SeqFileWriter.java:123)
>>>>>         at org.apache.hadoop.chukwa.datacollection.writer.PipelineStageWriter.init(PipelineStageWriter.java:88)
>>>>>         at org.apache.hadoop.chukwa.datacollection.collector.servlet.ServletCollector.init(ServletCollector.java:112)
>>>>>         at org.mortbay.jetty.servlet.ServletHolder.initServlet(ServletHolder.java:433)
>>>>>         at org.mortbay.jetty.servlet.ServletHolder.doStart(ServletHolder.java:256)
>>>>>         at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:39)
>>>>>         at org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:616)
>>>>>         at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
>>>>>         at org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:513)
>>>>>         at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:39)
>>>>>         at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
>>>>>         at org.mortbay.jetty.Server.doStart(Server.java:222)
>>>>>         at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:39)
>>>>>         at org.apache.hadoop.chukwa.datacollection.collector.CollectorStub.main(CollectorStub.java:121)
>>>>> Caused by: java.lang.ClassNotFoundException: org.apache.commons.configuration.Configuration
>>>>>         at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
>>>>>         at java.security.AccessController.doPrivileged(Native Method)
>>>>>         at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
>>>>>         at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
>>>>>         at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
>>>>>         at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
>>>>>         ... 29 more
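>>>>>
>>>>> (A guess at a fix, based on the trace above: the collector's classpath is
>>>>> missing commons-configuration, which Hadoop 0.20.203's security/metrics code
>>>>> needs. Copying the jar that ships with Hadoop into Chukwa's lib directory and
>>>>> restarting the collector should make the NoClassDefFoundError go away - the
>>>>> exact jar name and directory layout may differ on your install:
>>>>>
>>>>>   cp $HADOOP_HOME/lib/commons-configuration-*.jar $CHUKWA_HOME/lib/
>>>>>   bin/chukwa collector
>>>>> )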
>>>>> 
>>>>> Regards,
>>>>>     Mohammad Tariq
>>>>> 
>>>>> 
>>>>> 
>>>>> On Fri, Nov 11, 2011 at 3:09 AM, AD [via Apache Chukwa]
>>>>> <[hidden email]> wrote:
>>>>>> Can you share the collector.log file?
>>>>>> Also, are you setting the CHUKWA_HOME variable properly?
>>>>>> The iostat error is a bit concerning.
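>>>>>>
>>>>>> (On Ubuntu, iostat and sar come from the sysstat package, so the
>>>>>> "Cannot run program /usr/bin/iostat" errors quoted below usually just mean
>>>>>> it isn't installed - a sketch, assuming an apt-based system:
>>>>>>
>>>>>>   sudo apt-get install sysstat
>>>>>>   which iostat sar    # both should now resolve
>>>>>> )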
>>>>>> 
>>>>>>> On Thu, Nov 10, 2011 at 2:53 PM, Mohammad Tariq <[hidden email]> wrote:
>>>>>>> 
>>>>>>> Could you share your 'configured' Chukwa configuration files - the ones
>>>>>>> needed to run the Chukwa agent and collector - if you have any and if it
>>>>>>> is possible for you, so that I can look at them and figure out what I am
>>>>>>> missing?
>>>>>>> 
>>>>>>> Regards,
>>>>>>>     Mohammad Tariq
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>>> On Fri, Nov 11, 2011 at 1:16 AM, Mohammad Tariq <[hidden email]> wrote:
>>>>>>>> This is the output on my terminal when I issue
>>>>>>>> "prashant@ubuntu:~/chukwa-0.4.0$ bin/chukwa agent" -
>>>>>>>> prashant@ubuntu:~/chukwa-0.4.0$ java.io.IOException: Cannot run program "/usr/bin/iostat": java.io.IOException: error=2, No such file or directory
>>>>>>>>        at java.lang.ProcessBuilder.start(ProcessBuilder.java:460)
>>>>>>>>        at java.lang.Runtime.exec(Runtime.java:593)
>>>>>>>>        at java.lang.Runtime.exec(Runtime.java:431)
>>>>>>>>        at java.lang.Runtime.exec(Runtime.java:328)
>>>>>>>>        at org.apache.hadoop.chukwa.inputtools.plugin.ExecPlugin.execute(ExecPlugin.java:66)
>>>>>>>>        at org.apache.hadoop.chukwa.datacollection.adaptor.ExecAdaptor$RunToolTask.run(ExecAdaptor.java:68)
>>>>>>>>        at java.util.TimerThread.mainLoop(Timer.java:512)
>>>>>>>>        at java.util.TimerThread.run(Timer.java:462)
>>>>>>>> Caused by: java.io.IOException: java.io.IOException: error=2, No such file or directory
>>>>>>>>        at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
>>>>>>>>        at java.lang.ProcessImpl.start(ProcessImpl.java:65)
>>>>>>>>        at java.lang.ProcessBuilder.start(ProcessBuilder.java:453)
>>>>>>>>        ... 7 more
>>>>>>>> java.io.IOException: Cannot run program "/usr/bin/sar": java.io.IOException: error=2, No such file or directory
>>>>>>>>        at java.lang.ProcessBuilder.start(ProcessBuilder.java:460)
>>>>>>>>        at java.lang.Runtime.exec(Runtime.java:593)
>>>>>>>>        at java.lang.Runtime.exec(Runtime.java:431)
>>>>>>>>        at java.lang.Runtime.exec(Runtime.java:328)
>>>>>>>>        at org.apache.hadoop.chukwa.inputtools.plugin.ExecPlugin.execute(ExecPlugin.java:66)
>>>>>>>>        at org.apache.hadoop.chukwa.datacollection.adaptor.ExecAdaptor$RunToolTask.run(ExecAdaptor.java:68)
>>>>>>>>        at java.util.TimerThread.mainLoop(Timer.java:512)
>>>>>>>>        at java.util.TimerThread.run(Timer.java:462)
>>>>>>>> Caused by: java.io.IOException: java.io.IOException: error=2, No such file or directory
>>>>>>>>        at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
>>>>>>>>        at java.lang.ProcessImpl.start(ProcessImpl.java:65)
>>>>>>>>        at java.lang.ProcessBuilder.start(ProcessBuilder.java:453)
>>>>>>>>        ... 7 more
>>>>>>>> 
>>>>>>>> and when I issue "prashant@ubuntu:~/chukwa-0.4.0$ bin/chukwa
>>>>>>>> collector" I get this -
>>>>>>>> prashant@ubuntu:~/chukwa-0.4.0$ 2011-11-11 01:04:25.851::INFO:
>>>>>>>> Logging to STDERR via org.mortbay.log.StdErrLog
>>>>>>>> 2011-11-11 01:04:25.894::INFO:  jetty-6.1.11
>>>>>>>> 
>>>>>>>> Regards,
>>>>>>>>     Mohammad Tariq
>>>>>>>> 
>>>>>>>> 
>>>>>>>> 
>>>>>>>> On Fri, Nov 11, 2011 at 1:12 AM, Mohammad Tariq <[hidden email]> wrote:
>>>>>>>>> Hello AD,
>>>>>>>>>  this is my initial_adaptors file -
>>>>>>>>> 
>>>>>>>>> add org.apache.hadoop.chukwa.datacollection.adaptor.ExecAdaptor Iostat 60 /usr/bin/iostat -x -k 55 2 0
>>>>>>>>> add org.apache.hadoop.chukwa.datacollection.adaptor.ExecAdaptor Df 60 /bin/df -l 0
>>>>>>>>> add org.apache.hadoop.chukwa.datacollection.adaptor.ExecAdaptor Sar 60 /usr/bin/sar -q -r -n ALL 55 0
>>>>>>>>> add org.apache.hadoop.chukwa.datacollection.adaptor.ExecAdaptor Top 60 /usr/bin/top -b -n 1 -c 0
>>>>>>>>> 
>>>>>>>>> and this is the collectors file -
>>>>>>>>> localhost
>>>>>>>>> 
>>>>>>>>> I am trying to run both agent and collector on the same machine.
>>>>>>>>> 
>>>>>>>>> The collector.log file looks like this -
>>>>>>>>> 2011-11-11 01:04:25,180 INFO main ChukwaConfiguration - chukwaConf is /home/prashant/chukwa-0.4.0/bin/../conf
>>>>>>>>> 2011-11-11 01:04:25,434 INFO main root - initing servletCollector
>>>>>>>>> 2011-11-11 01:04:25,438 INFO main PipelineStageWriter - using pipelined writers, pipe length is 2
>>>>>>>>> 2011-11-11 01:04:25,448 INFO main SeqFileWriter - rotateInterval is 300000
>>>>>>>>> 2011-11-11 01:04:25,448 INFO main SeqFileWriter - outputDir is /chukwa/logs/
>>>>>>>>> 2011-11-11 01:04:25,448 INFO main SeqFileWriter - fsname is hdfs://localhost:9000/
>>>>>>>>> 2011-11-11 01:04:25,448 INFO main SeqFileWriter - filesystem type from core-default.xml is org.apache.hadoop.hdfs.DistributedFileSystem
>>>>>>>>> 2011-11-11 01:04:25,455 INFO Thread-6 SocketTeeWriter - listen thread started
>>>>>>>>> 2011-11-11 01:04:25,593 ERROR main SeqFileWriter - can't connect to HDFS, trying default file system instead (likely to be local)
>>>>>>>>> java.lang.NoClassDefFoundError: org/apache/commons/configuration/Configuration
>>>>>>>>>        at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.<init>(DefaultMetricsSystem.java:37)
>>>>>>>>>        at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.<clinit>(DefaultMetricsSystem.java:34)
>>>>>>>>>        at org.apache.hadoop.security.UgiInstrumentation.create(UgiInstrumentation.java:51)
>>>>>>>>>        at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:196)
>>>>>>>>>        at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:159)
>>>>>>>>>        at org.apache.hadoop.security.UserGroupInformation.isSecurityEnabled(UserGroupInformation.java:216)
>>>>>>>>>        at org.apache.hadoop.security.KerberosName.<clinit>(KerberosName.java:83)
>>>>>>>>>        at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:189)
>>>>>>>>>        at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:159)
>>>>>>>>>        at org.apache.hadoop.security.UserGroupInformation.isSecurityEnabled(UserGroupInformation.java:216)
>>>>>>>>>        at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:409)
>>>>>>>>>        at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:395)
>>>>>>>>>        at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:1418)
>>>>>>>>>        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1319)
>>>>>>>>>        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:226)
>>>>>>>>>        at org.apache.hadoop.chukwa.datacollection.writer.SeqFileWriter.init(SeqFileWriter.java:123)
>>>>>>>>>        at org.apache.hadoop.chukwa.datacollection.writer.PipelineStageWriter.init(PipelineStageWriter.java:88)
>>>>>>>>>        at org.apache.hadoop.chukwa.datacollection.collector.servlet.ServletCollector.init(ServletCollector.java:112)
>>>>>>>>>        at org.mortbay.jetty.servlet.ServletHolder.initServlet(ServletHolder.java:433)
>>>>>>>>>        at org.mortbay.jetty.servlet.ServletHolder.doStart(ServletHolder.java:256)
>>>>>>>>>        at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:39)
>>>>>>>>>        at org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:616)
>>>>>>>>>        at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
>>>>>>>>>        at org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:513)
>>>>>>>>>        at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:39)
>>>>>>>>>        at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
>>>>>>>>>        at org.mortbay.jetty.Server.doStart(Server.java:222)
>>>>>>>>>        at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:39)
>>>>>>>>>        at org.apache.hadoop.chukwa.datacollection.collector.CollectorStub.main(CollectorStub.java:121)
>>>>>>>>> Caused by: java.lang.ClassNotFoundException: org.apache.commons.configuration.Configuration
>>>>>>>>>        at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
>>>>>>>>>        at java.security.AccessController.doPrivileged(Native Method)
>>>>>>>>>        at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
>>>>>>>>>        at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
>>>>>>>>>        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
>>>>>>>>>        at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
>>>>>>>>>        ... 29 more
>>>>>>>>> 
>>>>>>>>> Regards,
>>>>>>>>>     Mohammad Tariq
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> On Thu, Nov 10, 2011 at 8:30 PM, AD <[hidden email]> wrote:
>>>>>>>>>> What happens when you tail the collector.log file?
>>>>>>>>>> Can you share your initial_adaptors and collectors files in conf?
>>>>>>>>>> 
>>>>>>>>>> On Thu, Nov 10, 2011 at 8:13 AM, Mohammad Tariq <[hidden email]> wrote:
>>>>>>>>>>> 
>>>>>>>>>>> Hello AD,
>>>>>>>>>>>   I am sorry for leaving the discussion in between, I was busy with
>>>>>>>>>>> my exams. Now I am back and need your help. I started with what you
>>>>>>>>>>> have suggested and read about the concepts (agents and collectors
>>>>>>>>>>> etc.). Now I am trying to collect data from a file on the local
>>>>>>>>>>> file system and dump it into HDFS, and I am running the agent and
>>>>>>>>>>> collector on the same machine. I was able to start the agent, but
>>>>>>>>>>> when I tried to start the collector I got the following line at the
>>>>>>>>>>> terminal and the terminal got stuck there itself -
>>>>>>>>>>> tariq@ubuntu:~/chukwa-0.4.0$ 2011-11-10 18:42:09.521::INFO:  Logging
>>>>>>>>>>> to STDERR via org.mortbay.log.StdErrLog
>>>>>>>>>>> 2011-11-10 18:42:09.544::INFO:  jetty-6.1.11
>>>>>>>>>>> 
>>>>>>>>>>> When I issued jps, I could see only the agent.
>>>>>>>>>>> 
>>>>>>>>>>> Regards,
>>>>>>>>>>>     Mohammad Tariq
>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>>> On Sat, Nov 5, 2011 at 5:38 AM, Bill Graham <[hidden email]> wrote:
>>>>>>>>>>>> One difference between Chukwa and Flume is that in Chukwa you
>>>>>>>>>>>> configure each component individually to know how to handle
>>>>>>>>>>>> different datatypes. This makes it fairly straightforward to set
>>>>>>>>>>>> up, but it can be painful to manage for very large installs (so
>>>>>>>>>>>> I've heard). Flume centralizes the configs of "flows", which helps
>>>>>>>>>>>> when managing larger installs, at the cost of additional
>>>>>>>>>>>> complexity.
>>>>>>>>>>>> 
>>>>>>>>>>>> On Fri, Nov 4, 2011 at 7:02 AM, AD <[hidden email]> wrote:
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Are you running the agent and collector on the same machine?
>>>>>>>>>>>>> Off the cuff it's something like:
>>>>>>>>>>>>> 1 - set your CHUKWA_HOME env variable
>>>>>>>>>>>>> 2 - edit conf/chukwa-env.sh and make sure JAVA_HOME and all the vars look good
>>>>>>>>>>>>> 3 - edit conf/initial_adaptors with the adaptors you need (note that the dataType matters - see step 6)
>>>>>>>>>>>>> 4 - edit conf/collectors and add the hostname/ip of the collectors
>>>>>>>>>>>>> 5 - edit conf/agents and add the ip/hostname of the agents
>>>>>>>>>>>>> 6 - edit conf/chukwa-demux.conf and make sure you have a demux processor for your dataType (name from initial_adaptors)
>>>>>>>>>>>>> On the agent node run bin/chukwa agent, then tail ${CHUKWA_LOGS}/agent.log
>>>>>>>>>>>>> On the collector node run bin/chukwa collector, then tail ${CHUKWA_LOGS}/collector.log
>>>>>>>>>>>>> On the collector node run bin/chukwa Demux, then tail ${CHUKWA_LOGS}/Demux.log
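>>>>>>>>>>>>>
>>>>>>>>>>>>> (In shell terms, roughly - a sketch only; paths and log locations
>>>>>>>>>>>>> depend on your install and on what chukwa-env.sh sets:
>>>>>>>>>>>>>
>>>>>>>>>>>>>   export CHUKWA_HOME=/path/to/chukwa-0.4.0
>>>>>>>>>>>>>   cd $CHUKWA_HOME
>>>>>>>>>>>>>   bin/chukwa agent        # on the agent node
>>>>>>>>>>>>>   bin/chukwa collector    # on the collector node
>>>>>>>>>>>>>   tail -f /tmp/chukwa/log/agent.log /tmp/chukwa/log/collector.log
>>>>>>>>>>>>> )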
>>>>>>>>>>>>> 
>>>>>>>>>>>>> 
>>>>>>>>>>>>> On Fri, Nov 4, 2011 at 9:48 AM, Mohammad Tariq <[hidden email]> wrote:
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Hello AD,
>>>>>>>>>>>>>>   Yes, I have read the wiki and right now I am trying to set up
>>>>>>>>>>>>>> a Chukwa cluster using the wiki itself. Are there any other
>>>>>>>>>>>>>> links you are aware of that can help me learn and use Chukwa in
>>>>>>>>>>>>>> a better fashion? I am trying to use Chukwa for data in "The
>>>>>>>>>>>>>> Wellsite Information Transfer Standard Markup Language (WITSML)"
>>>>>>>>>>>>>> format.
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Regards,
>>>>>>>>>>>>>>     Mohammad Tariq
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> On Fri, Nov 4, 2011 at 6:49 PM, AD <[hidden email]> wrote:
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> I think it helps to read the wiki and understand the basic
>>>>>>>>>>>>>>> concepts, like agents and collectors. What kind of data do you
>>>>>>>>>>>>>>> want to parse? I have used both Flume and Chukwa myself and can
>>>>>>>>>>>>>>> say for sure Chukwa is miles easier to work with and configure.
>>>>>>>>>>>>>>> Flume may have some more advanced features, but so far I have
>>>>>>>>>>>>>>> found Chukwa both easy to extend from a development standpoint
>>>>>>>>>>>>>>> and much easier to configure.
>>>>>>>>>>>>>>> Have you read the wiki pages?
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> On Fri, Nov 4, 2011 at 8:39 AM, Mohammad Tariq <[hidden email]> wrote:
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> Hello list,
>>>>>>>>>>>>>>>>   I am totally new to Chukwa and I have to learn and start
>>>>>>>>>>>>>>>> using it as soon as possible as part of my new project. Could
>>>>>>>>>>>>>>>> anyone let me know how to begin? It would be a great favor to
>>>>>>>>>>>>>>>> me. I know Flume and have used it already. Also, I would like
>>>>>>>>>>>>>>>> to know how different Chukwa is from Flume?
>>>>>>>>>>>>>>>>   Many thanks in advance.
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>   Regards,
>>>>>>>>>>>>>>>>     Mohammad Tariq
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>> 
>>>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> 
>>>>> 
>>>> 
>>>> 
>>> 
>> 

