incubator-chukwa-user mailing list archives

From TARIQ <donta...@gmail.com>
Subject Re: Error while starting the collector
Date Mon, 14 Nov 2011 13:55:05 GMT
Yes, all the daemons are running and I am able to move the data as well.

Regards,
    Mohammad Tariq



On Mon, Nov 14, 2011 at 7:13 PM, Ahmed Fathalla [via Apache Chukwa]
<ml-node+s679492n3506856h40@n3.nabble.com> wrote:
> Are you sure you started HDFS already? Are the namenode, datanode, and
> tasktracker all started? Can you store/read files from HDFS before starting
> Chukwa?
>
> On Mon, Nov 14, 2011 at 3:26 PM, Mohammad Tariq <[hidden email]> wrote:
>>
>> One more strange thing I have noticed: if I remove
>> "initial_adaptors", I am able to start the agent. But if the
>> "initial_adaptors" file is present inside "conf", I get the
>> following errors -
>> tariq@ubuntu:~/chukwa-0.4.0$ bin/chukwa agent
>> tariq@ubuntu:~/chukwa-0.4.0$ java.io.IOException: Cannot run program
>> "/usr/bin/sar": java.io.IOException: error=2, No such file or
>> directory
>>        at java.lang.ProcessBuilder.start(ProcessBuilder.java:460)
>>        at java.lang.Runtime.exec(Runtime.java:593)
>>        at java.lang.Runtime.exec(Runtime.java:431)
>>        at java.lang.Runtime.exec(Runtime.java:328)
>>        at
>> org.apache.hadoop.chukwa.inputtools.plugin.ExecPlugin.execute(ExecPlugin.java:66)
>>        at
>> org.apache.hadoop.chukwa.datacollection.adaptor.ExecAdaptor$RunToolTask.run(ExecAdaptor.java:68)
>>        at java.util.TimerThread.mainLoop(Timer.java:512)
>>        at java.util.TimerThread.run(Timer.java:462)
>> Caused by: java.io.IOException: java.io.IOException: error=2, No such
>> file or directory
>>        at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
>>        at java.lang.ProcessImpl.start(ProcessImpl.java:65)
>>        at java.lang.ProcessBuilder.start(ProcessBuilder.java:453)
>>        ... 7 more
>> java.io.IOException: Cannot run program "/usr/bin/iostat":
>> java.io.IOException: error=2, No such file or directory
>>        at java.lang.ProcessBuilder.start(ProcessBuilder.java:460)
>>        at java.lang.Runtime.exec(Runtime.java:593)
>>        at java.lang.Runtime.exec(Runtime.java:431)
>>        at java.lang.Runtime.exec(Runtime.java:328)
>>        at
>> org.apache.hadoop.chukwa.inputtools.plugin.ExecPlugin.execute(ExecPlugin.java:66)
>>        at
>> org.apache.hadoop.chukwa.datacollection.adaptor.ExecAdaptor$RunToolTask.run(ExecAdaptor.java:68)
>>        at java.util.TimerThread.mainLoop(Timer.java:512)
>>        at java.util.TimerThread.run(Timer.java:462)
>> Caused by: java.io.IOException: java.io.IOException: error=2, No such
>> file or directory
>>        at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
>>        at java.lang.ProcessImpl.start(ProcessImpl.java:65)
>>        at java.lang.ProcessBuilder.start(ProcessBuilder.java:453)
>>        ... 7 more
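[Editor's note: the two `Cannot run program` failures above mean the exec adaptors in `initial_adaptors` point at `/usr/bin/sar` and `/usr/bin/iostat`, which are not installed on this box (they ship with the sysstat package on most Linux distributions). A quick hedged check, with the tool paths taken from the error messages:]

```shell
# Check whether the binaries Chukwa's exec adaptors expect actually exist.
# Paths come from the stack traces above; install hint is distro-specific.
check_tool() {
  if [ -x "$1" ]; then
    echo "$1: found"
  else
    echo "$1: missing"   # e.g. on Debian/Ubuntu: sudo apt-get install sysstat
  fi
}

for tool in /usr/bin/sar /usr/bin/iostat; do
  check_tool "$tool"
done
```

If the tools are missing and you do not need system metrics, the alternative is to comment out the corresponding adaptor lines in `conf/initial_adaptors`.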
>>
>> Regards,
>>     Mohammad Tariq
>>
>>
>>
>> On Mon, Nov 14, 2011 at 6:49 PM, TARIQ <[hidden email]> wrote:
>> > Hello Ahmed,
>> >    Thanks for your valuable reply. Earlier it was
>> > hdfs://localhost:9000, but it was not working, so I changed it to 9999.
>> > But 9999 is not working either. Here is my core-site.xml file -
>> > <configuration>
>> >      <property>
>> >          <name>dfs.replication</name>
>> >          <value>1</value>
>> >      </property>
>> >
>> >       <property>
>> >          <name>dfs.data.dir</name>
>> >          <value>/home/tariq/hdfs/data</value>
>> >      </property>
>> >
>> >      <property>
>> >          <name>dfs.name.dir</name>
>> >          <value>/home/tariq/hdfs/name</value>
>> >      </property>
>> > </configuration>
>> >
>> > And hdfs-site.xml -
>> > <configuration>
>> >    <property>
>> >          <name>fs.default.name</name>
>> >          <value>hdfs://localhost:9000</value>
>> >    </property>
>> >    <property>
>> >   <name>hadoop.tmp.dir</name>
>> >   <value>file:///home/tariq/hadoop_tmp</value>
>> >    </property>
>> > </configuration>
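[Editor's note: in stock Hadoop, `fs.default.name` and `hadoop.tmp.dir` conventionally live in core-site.xml, while the `dfs.*` properties belong in hdfs-site.xml; the two files quoted above appear to have their contents swapped. A small hedged helper to check which file actually defines a given property, with illustrative paths:]

```shell
# Print which of the given config files defines a property.
# Usage: prop_file fs.default.name core-site.xml hdfs-site.xml
prop_file() {
  prop=$1; shift
  grep -l "$prop" "$@" 2>/dev/null
}
```

Running `prop_file fs.default.name conf/core-site.xml conf/hdfs-site.xml` should name core-site.xml; if it names hdfs-site.xml instead, the properties are in the wrong files.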
>> >
>> > Regards,
>> >     Mohammad Tariq
>> >
>> >
>> >
>> > On Mon, Nov 14, 2011 at 5:21 PM, Ahmed Fathalla [via Apache Chukwa]
>> > <[hidden email]> wrote:
>> >> I think the problem you have is in these lines:
>> >>  <property>
>> >>    <name>writer.hdfs.filesystem</name>
>> >>    <value>hdfs://localhost:9999/</value>
>> >>    <description>HDFS to dump to</description>
>> >>  </property>
>> >>
>> >>
>> >> Are you sure you've got HDFS running on port 9999 on your local
>> >> machine?
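[Editor's note: this is the crux of the thread — the collector's `writer.hdfs.filesystem` (port 9999) must match the port the NameNode actually listens on (`fs.default.name`, port 9000 here). A small hedged helper to extract the port from an fs URI so the two settings can be compared:]

```shell
# Extract the port number from an fs.default.name-style URI.
# The collector's fsname and the NameNode URI must use the same port.
uri_port() {
  echo "$1" | sed -n 's|.*://[^:/]*:\([0-9]*\).*|\1|p'
}

uri_port "hdfs://localhost:9000"    # the NameNode URI from core-site.xml
uri_port "hdfs://localhost:9999/"   # the collector's writer.hdfs.filesystem
```

If the two printed ports differ, the collector is dialing a port nothing listens on; the fix is to make `writer.hdfs.filesystem` match `fs.default.name`, not the other way round.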
>> >> On Mon, Nov 14, 2011 at 1:18 PM, Mohammad Tariq <[hidden email]> wrote:
>> >>>
>> >>> Whenever I try to start the collector using " bin/chukwa
>> >>> collector ", I get the following line on the terminal and the
>> >>> terminal hangs there -
>> >>>
>> >>> tariq@ubuntu:~/chukwa-0.4.0$ bin/chukwa collector
>> >>> tariq@ubuntu:~/chukwa-0.4.0$ 2011-11-14 16:36:28.888::INFO:  Logging
>> >>> to STDERR via org.mortbay.log.StdErrLog
>> >>> 2011-11-14 16:36:28.911::INFO:  jetty-6.1.11
>> >>>
>> >>>
>> >>> And this is the content of my collector.log file -
>> >>>
>> >>> 2011-11-14 16:36:27,955 INFO main ChukwaConfiguration - chukwaConf is
>> >>> /home/tariq/chukwa-0.4.0/bin/../conf
>> >>> 2011-11-14 16:36:28,096 INFO main root - initing servletCollector
>> >>> 2011-11-14 16:36:28,098 INFO main PipelineStageWriter - using
>> >>> pipelined writers, pipe length is 2
>> >>> 2011-11-14 16:36:28,100 INFO Thread-6 SocketTeeWriter - listen thread started
>> >>> 2011-11-14 16:36:28,102 INFO main SeqFileWriter - rotateInterval is 300000
>> >>> 2011-11-14 16:36:28,102 INFO main SeqFileWriter - outputDir is /chukwa
>> >>> 2011-11-14 16:36:28,102 INFO main SeqFileWriter - fsname is
>> >>> hdfs://localhost:9999/
>> >>> 2011-11-14 16:36:28,102 INFO main SeqFileWriter - filesystem type from
>> >>> core-default.xml is org.apache.hadoop.hdfs.DistributedFileSystem
>> >>> 2011-11-14 16:36:28,196 ERROR main SeqFileWriter - can't connect to
>> >>> HDFS, trying default file system instead (likely to be local)
>> >>> java.lang.NoClassDefFoundError:
>> >>> org/apache/commons/configuration/Configuration
>> >>>        at
>> >>>
>> >>>
>> >>> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.<init>(DefaultMetricsSystem.java:37)
>> >>>        at
>> >>>
>> >>>
>> >>> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.<clinit>(DefaultMetricsSystem.java:34)
>> >>>        at
>> >>>
>> >>>
>> >>> org.apache.hadoop.security.UgiInstrumentation.create(UgiInstrumentation.java:51)
>> >>>        at
>> >>>
>> >>>
>> >>> org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:196)
>> >>>        at
>> >>>
>> >>>
>> >>> org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:159)
>> >>>        at
>> >>>
>> >>>
>> >>> org.apache.hadoop.security.UserGroupInformation.isSecurityEnabled(UserGroupInformation.java:216)
>> >>>        at
>> >>> org.apache.hadoop.security.KerberosName.<clinit>(KerberosName.java:83)
>> >>>        at
>> >>>
>> >>>
>> >>> org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:189)
>> >>>        at
>> >>>
>> >>>
>> >>> org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:159)
>> >>>        at
>> >>>
>> >>>
>> >>> org.apache.hadoop.security.UserGroupInformation.isSecurityEnabled(UserGroupInformation.java:216)
>> >>>        at
>> >>>
>> >>>
>> >>> org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:409)
>> >>>        at
>> >>>
>> >>>
>> >>> org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:395)
>> >>>        at
>> >>> org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:1418)
>> >>>        at
>> >>> org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1319)
>> >>>        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:226)
>> >>>        at
>> >>>
>> >>>
>> >>> org.apache.hadoop.chukwa.datacollection.writer.SeqFileWriter.init(SeqFileWriter.java:123)
>> >>>        at
>> >>>
>> >>>
>> >>> org.apache.hadoop.chukwa.datacollection.writer.PipelineStageWriter.init(PipelineStageWriter.java:88)
>> >>>        at
>> >>>
>> >>>
>> >>> org.apache.hadoop.chukwa.datacollection.collector.servlet.ServletCollector.init(ServletCollector.java:112)
>> >>>        at
>> >>>
>> >>>
>> >>> org.mortbay.jetty.servlet.ServletHolder.initServlet(ServletHolder.java:433)
>> >>>        at
>> >>>
>> >>> org.mortbay.jetty.servlet.ServletHolder.doStart(ServletHolder.java:256)
>> >>>        at
>> >>>
>> >>> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:39)
>> >>>        at
>> >>>
>> >>>
>> >>> org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:616)
>> >>>        at
>> >>> org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
>> >>>        at
>> >>>
>> >>> org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:513)
>> >>>        at
>> >>>
>> >>> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:39)
>> >>>        at
>> >>>
>> >>> org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
>> >>>        at org.mortbay.jetty.Server.doStart(Server.java:222)
>> >>>        at
>> >>>
>> >>> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:39)
>> >>>        at
>> >>>
>> >>>
>> >>> org.apache.hadoop.chukwa.datacollection.collector.CollectorStub.main(CollectorStub.java:121)
>> >>> Caused by: java.lang.ClassNotFoundException:
>> >>> org.apache.commons.configuration.Configuration
>> >>>        at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
>> >>>        at java.security.AccessController.doPrivileged(Native Method)
>> >>>        at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
>> >>>        at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
>> >>>        at
>> >>> sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
>> >>>        at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
>> >>>        ... 29 more
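[Editor's note: the `ClassNotFoundException: org.apache.commons.configuration.Configuration` above is a separate problem from the port mismatch — the commons-configuration jar is missing from the collector's classpath. A hedged sketch of a fix, locating the jar Hadoop ships with (the `HADOOP_HOME` path is illustrative, not from the thread):]

```shell
# Find the commons-configuration jar under a Hadoop lib directory so it
# can be copied (or symlinked) into Chukwa's lib/ directory.
find_jar() {
  find "$1" -name 'commons-configuration-*.jar' 2>/dev/null | head -n1
}

# Illustrative usage -- adjust paths for your installation:
#   jar=$(find_jar "$HADOOP_HOME/lib")
#   [ -n "$jar" ] && cp "$jar" /home/tariq/chukwa-0.4.0/lib/
```

After copying the jar, restart the collector so the new classpath takes effect.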
>> >>>
>> >>> Could anyone point out the issue if possible? Although, I am able to
>> >>> start the agent using " bin/chukwa agent ". I am using Chukwa (0.4.0)
>> >>> on a single machine. The chukwa-collector-conf.xml file looks like
>> >>> this -
>> >>>
>> >>> <configuration>
>> >>>
>> >>>  <property>
>> >>>    <name>chukwaCollector.writerClass</name>
>> >>>
>> >>>
>> >>>
>> >>>  <value>org.apache.hadoop.chukwa.datacollection.writer.PipelineStageWriter</value>
>> >>>  </property>
>> >>>
>> >>>  <property>
>> >>>    <name>chukwaCollector.pipeline</name>
>> >>>
>> >>>
>> >>>
>> >>>  <value>org.apache.hadoop.chukwa.datacollection.writer.SocketTeeWriter,org.apache.hadoop.chukwa.datacollection.writer.SeqFileWriter</value>
>> >>>  </property>
>> >>>
>> >>> <!-- LocalWriter parameters
>> >>>  <property>
>> >>>    <name>chukwaCollector.localOutputDir</name>
>> >>>    <value>/tmp/chukwa/dataSink/</value>
>> >>>    <description>Chukwa local data sink directory, see
>> >>> LocalWriter.java</description>
>> >>>  </property>
>> >>>
>> >>>  <property>
>> >>>    <name>chukwaCollector.writerClass</name>
>> >>>
>> >>>
>> >>>
>> >>>  <value>org.apache.hadoop.chukwa.datacollection.writer.localfs.LocalWriter</value>
>> >>>    <description>Local chukwa writer, see
>> >>> LocalWriter.java</description>
>> >>>  </property>
>> >>> -->
>> >>>
>> >>>  <property>
>> >>>    <name>writer.hdfs.filesystem</name>
>> >>>    <value>hdfs://localhost:9999/</value>
>> >>>    <description>HDFS to dump to</description>
>> >>>  </property>
>> >>>
>> >>>  <property>
>> >>>    <name>chukwaCollector.outputDir</name>
>> >>>    <value>/chukwa/logs/</value>
>> >>>    <description>Chukwa data sink directory</description>
>> >>>  </property>
>> >>>
>> >>>  <property>
>> >>>    <name>chukwaCollector.rotateInterval</name>
>> >>>    <value>300000</value>
>> >>>    <description>Chukwa rotate interval (ms)</description>
>> >>>  </property>
>> >>>
>> >>>  <property>
>> >>>    <name>chukwaCollector.http.port</name>
>> >>>    <value>8080</value>
>> >>>    <description>The HTTP port number the collector will listen
>> >>> on</description>
>> >>>  </property>
>> >>>
>> >>> </configuration>
>> >>>
>> >>> And both the "collectors" and "agents" files contain only one line,
>> >>> i.e. "localhost".
>> >>>
>> >>> Many thanks in advance
>> >>> Regards,
>> >>>     Mohammad Tariq
>> >>
>> >>
>> >>
>> >> --
>> >> Ahmed Fathalla
>> >>
>> >>
>> >
>> >
>
>
>
> --
> Ahmed Fathalla
>
>


--
View this message in context: http://apache-chukwa.679492.n3.nabble.com/Error-while-starting-the-collector-tp3506534p3506888.html
Sent from the Chukwa - Users mailing list archive at Nabble.com.