chukwa-user mailing list archives

From Ahmed Fathalla <afatha...@gmail.com>
Subject Re: Error while starting the collector
Date Mon, 14 Nov 2011 11:50:18 GMT
I think the problem is in this part of your chukwa-collector-conf.xml:

 <property>
   <name>writer.hdfs.filesystem</name>
   <value>hdfs://localhost:9999/</value>
   <description>HDFS to dump to</description>
 </property>


Are you sure you've got HDFS running on port 9999 on your local machine?
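One way to check, as a sketch: the collector's writer.hdfs.filesystem value should match the fs.default.name URI in Hadoop's core-site.xml. The file path and sample contents below are illustrative, not taken from your setup:

```shell
# Illustrative only: write a sample core-site.xml so the extraction
# command below is self-contained. On a real cluster you would point
# at $HADOOP_HOME/conf/core-site.xml instead.
cat > /tmp/core-site.xml <<'EOF'
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
EOF

# Pull out the fs.default.name value; this is the URI the collector's
# writer.hdfs.filesystem setting must agree with.
grep -A1 'fs.default.name' /tmp/core-site.xml \
  | sed -n 's/.*<value>\(.*\)<\/value>.*/\1/p'
```

If the value printed here (9000 in this sample) differs from the 9999 in your collector config, the collector is trying to reach a NameNode that isn't there.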

On Mon, Nov 14, 2011 at 1:18 PM, Mohammad Tariq <dontariq@gmail.com> wrote:

> Whenever I try to start the collector using "bin/chukwa
> collector", I get the following line on the terminal and then it
> hangs there:
>
> tariq@ubuntu:~/chukwa-0.4.0$ bin/chukwa collector
> tariq@ubuntu:~/chukwa-0.4.0$ 2011-11-14 16:36:28.888::INFO:  Logging
> to STDERR via org.mortbay.log.StdErrLog
> 2011-11-14 16:36:28.911::INFO:  jetty-6.1.11
>
>
> And this is the content of my collector.log file -
>
> 2011-11-14 16:36:27,955 INFO main ChukwaConfiguration - chukwaConf is
> /home/tariq/chukwa-0.4.0/bin/../conf
> 2011-11-14 16:36:28,096 INFO main root - initing servletCollector
> 2011-11-14 16:36:28,098 INFO main PipelineStageWriter - using
> pipelined writers, pipe length is 2
> 2011-11-14 16:36:28,100 INFO Thread-6 SocketTeeWriter - listen thread
> started
> 2011-11-14 16:36:28,102 INFO main SeqFileWriter - rotateInterval is 300000
> 2011-11-14 16:36:28,102 INFO main SeqFileWriter - outputDir is /chukwa
> 2011-11-14 16:36:28,102 INFO main SeqFileWriter - fsname is
> hdfs://localhost:9999/
> 2011-11-14 16:36:28,102 INFO main SeqFileWriter - filesystem type from
> core-default.xml is org.apache.hadoop.hdfs.DistributedFileSystem
> 2011-11-14 16:36:28,196 ERROR main SeqFileWriter - can't connect to
> HDFS, trying default file system instead (likely to be local)
> java.lang.NoClassDefFoundError:
> org/apache/commons/configuration/Configuration
>        at
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.<init>(DefaultMetricsSystem.java:37)
>        at
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.<clinit>(DefaultMetricsSystem.java:34)
>        at
> org.apache.hadoop.security.UgiInstrumentation.create(UgiInstrumentation.java:51)
>        at
> org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:196)
>        at
> org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:159)
>        at
> org.apache.hadoop.security.UserGroupInformation.isSecurityEnabled(UserGroupInformation.java:216)
>        at
> org.apache.hadoop.security.KerberosName.<clinit>(KerberosName.java:83)
>        at
> org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:189)
>        at
> org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:159)
>        at
> org.apache.hadoop.security.UserGroupInformation.isSecurityEnabled(UserGroupInformation.java:216)
>        at
> org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:409)
>        at
> org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:395)
>        at
> org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:1418)
>        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1319)
>        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:226)
>        at
> org.apache.hadoop.chukwa.datacollection.writer.SeqFileWriter.init(SeqFileWriter.java:123)
>        at
> org.apache.hadoop.chukwa.datacollection.writer.PipelineStageWriter.init(PipelineStageWriter.java:88)
>        at
> org.apache.hadoop.chukwa.datacollection.collector.servlet.ServletCollector.init(ServletCollector.java:112)
>        at
> org.mortbay.jetty.servlet.ServletHolder.initServlet(ServletHolder.java:433)
>        at
> org.mortbay.jetty.servlet.ServletHolder.doStart(ServletHolder.java:256)
>        at
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:39)
>        at
> org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:616)
>        at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
>        at
> org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:513)
>        at
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:39)
>        at
> org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
>        at org.mortbay.jetty.Server.doStart(Server.java:222)
>        at
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:39)
>        at
> org.apache.hadoop.chukwa.datacollection.collector.CollectorStub.main(CollectorStub.java:121)
> Caused by: java.lang.ClassNotFoundException:
> org.apache.commons.configuration.Configuration
>        at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
>        at java.security.AccessController.doPrivileged(Native Method)
>        at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
>        at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
>        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
>        at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
>        ... 29 more
>
> Could anyone point out the issue? I am able to start the agent
> using "bin/chukwa agent". I am using Chukwa 0.4.0 on a single
> machine. The chukwa-collector-conf.xml file looks like this:
>
> <configuration>
>
>  <property>
>    <name>chukwaCollector.writerClass</name>
>
>  <value>org.apache.hadoop.chukwa.datacollection.writer.PipelineStageWriter</value>
>  </property>
>
>  <property>
>    <name>chukwaCollector.pipeline</name>
>
>  <value>org.apache.hadoop.chukwa.datacollection.writer.SocketTeeWriter,org.apache.hadoop.chukwa.datacollection.writer.SeqFileWriter</value>
>  </property>
>
> <!-- LocalWriter parameters
>  <property>
>    <name>chukwaCollector.localOutputDir</name>
>    <value>/tmp/chukwa/dataSink/</value>
>    <description>Chukwa local data sink directory, see
> LocalWriter.java</description>
>  </property>
>
>  <property>
>    <name>chukwaCollector.writerClass</name>
>
>  <value>org.apache.hadoop.chukwa.datacollection.writer.localfs.LocalWriter</value>
>    <description>Local chukwa writer, see LocalWriter.java</description>
>  </property>
> -->
>
>  <property>
>    <name>writer.hdfs.filesystem</name>
>    <value>hdfs://localhost:9999/</value>
>    <description>HDFS to dump to</description>
>  </property>
>
>  <property>
>    <name>chukwaCollector.outputDir</name>
>    <value>/chukwa/logs/</value>
>    <description>Chukwa data sink directory</description>
>  </property>
>
>  <property>
>    <name>chukwaCollector.rotateInterval</name>
>    <value>300000</value>
>    <description>Chukwa rotate interval (ms)</description>
>  </property>
>
>  <property>
>    <name>chukwaCollector.http.port</name>
>    <value>8080</value>
>    <description>The HTTP port number the collector will listen
> on</description>
>  </property>
>
> </configuration>
>
> Both the "collectors" and "agents" files contain only one line:
> "localhost".
>
> Many thanks in advance
> Regards,
>     Mohammad Tariq
>



-- 
Ahmed Fathalla
