incubator-chukwa-user mailing list archives

From Eric Yang <eric...@gmail.com>
Subject Re: Data process for HICC
Date Sun, 07 Nov 2010 18:02:20 GMT
Did you copy hadoop-metrics.properties.template to
hadoop/conf/hadoop-metrics.properties?  You also need to copy
chukwa-hadoop-0.4.0-client.jar and json.jar to hadoop/lib for this to
work.
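
For reference, a rough sketch of those two steps as shell commands; the
CHUKWA_HOME/HADOOP_HOME variables and the exact locations of the template and
jars inside the Chukwa 0.4.0 tree are assumptions about a default layout:

  # Sketch only: adjust paths to your installation.
  cp $CHUKWA_HOME/conf/hadoop-metrics.properties.template \
     $HADOOP_HOME/conf/hadoop-metrics.properties
  cp $CHUKWA_HOME/chukwa-hadoop-0.4.0-client.jar $HADOOP_HOME/lib/
  cp $CHUKWA_HOME/lib/json.jar $HADOOP_HOME/lib/
  # Restart the Hadoop daemons afterwards so they pick up the new jars.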

It looks like your checkpoint file is out of sync with the hash map
that keeps track of the files in the chukwa-hadoop client.  You might need
to shut down the Chukwa agent and HDFS, remove all checkpoint files from
chukwa/var, and then restart the Chukwa agent followed by Hadoop.
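
A possible command sequence for that reset, assuming default Chukwa 0.4 and
Hadoop control scripts and the chukwa/var checkpoint directory mentioned
above (the script names and checkpoint file pattern are assumptions):

  # Sketch only: verify the paths before removing anything.
  $CHUKWA_HOME/bin/stop-agents.sh        # stop the Chukwa agent(s)
  $HADOOP_HOME/bin/stop-dfs.sh           # stop HDFS
  rm $CHUKWA_HOME/var/*checkpoint*       # clear the stale checkpoint files
  $CHUKWA_HOME/bin/start-agents.sh       # restart the Chukwa agent(s)
  $HADOOP_HOME/bin/start-dfs.sh          # restart HDFS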

regards,
Eric

On Sun, Nov 7, 2010 at 2:42 AM, ZJL <zhu121972@163.com> wrote:
> Hi Eric:
>    Thank you for your instruction; I also hope the new release of Chukwa
> will come soon, but I still have some questions about my Chukwa deployment.
>  1. In my Chukwa system, the DFS metrics cannot be scraped, for example: DFS
> FS Name System Metrics, DFS Name Node Metrics, etc.
>  2. "Error initializing ChukwaClient with list of currently registered
> adaptors, clearing our local list of adaptors" appears in the log; do you
> know what deployment issue causes this problem?
>
> -----Original Message-----
> From: chukwa-user-return-579-zhu121972=163.com@incubator.apache.org
> [mailto:chukwa-user-return-579-zhu121972=163.com@incubator.apache.org]
> On Behalf Of Eric Yang
> Sent: November 6, 2010 7:19
> To: chukwa-user@incubator.apache.org
> Subject: Re: Data process for HICC
>
> 1. For system metrics, it is likely that the output of sar and iostat does
> not match what Chukwa expects.  I found the output of system utilities to
> be highly unreliable for scraping.  Hence, in Chukwa trunk, I have
> moved to Sigar for collecting system metrics.  This should address the
> problem that you were seeing.  Your original question is about node
> activity and the HDFS heatmap.  Those metrics are not populated
> automatically.  For node activity, Chukwa was based on Torque's
> pbsnodes.  This is no longer a maintained path.  For the HDFS heatmap, you
> need to have HDFS client trace and MR client trace log files streamed
> through Chukwa in order to generate graphs for those metrics.  There is
> no aggregation script to downsample the data for the HDFS heatmap,
> so only the last 6 hours are visible once the client trace log files
> are processed by Chukwa.  There is a lot of work to change aggregation
> from SQL to Pig+HBase.  However, most of that work is waiting for Pig
> 0.8 to be released before Chukwa can start the implementation.
> Therefore, you might need to wait a while for those features to
> appear.
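
One hedged sketch of how those client trace logs could be streamed through
Chukwa: register a file-tailing adaptor with the agent over its control port.
The port number (9093), the adaptor and data type names, and the log path
below are assumptions, not details taken from this thread:

  # Sketch only: the "add" command is typed on one line inside the telnet session.
  telnet localhost 9093
  add filetailer.CharFileTailingAdaptorUTF8 ClientTrace /var/log/hadoop/hadoop-clienttrace.log 0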
>
> 2.  hourlyRolling and dailyRolling should run automatically after
> starting Chukwa with the start-all.sh script.
>
> regards,
> Eric
>
> On Fri, Nov 5, 2010 at 4:24 AM, ZJL <zhu121972@163.com> wrote:
>> Hi Eric:
>> 1. I have started dbAdmin.sh and it is running in the background; otherwise
>> the database would contain nothing.  In my database, some of the field
>> records have no data, but not all of them.  The Chukwa release notes say
>> "System metrics collection may fail or be incomplete if your versions of
>> sar and iostat do not match the ones that Chukwa expects".  I suspect that
>> the sysstat version on my Ubuntu system does not match what Chukwa expects;
>> if so, what can I do about that?
>> 2. I don't know whether hourlyRolling or dailyRolling run automatically
>> after starting bin/start-all.sh.
>>
>> -----Original Message-----
>> From: chukwa-user-return-576-zhu121972=163.com@incubator.apache.org
>> [mailto:chukwa-user-return-576-zhu121972=163.com@incubator.apache.org]
>> On Behalf Of Eric Yang
>> Sent: November 5, 2010 8:39
>> To: chukwa-user@incubator.apache.org
>> Subject: Re: Data process for HICC
>>
>> Hi,
>>
>> This may be caused by dbAdmin.sh not running in the background.
>> In Chukwa 0.4, you need to have dbAdmin.sh periodically create table
>> partitions from the template tables.  If the script is not running,
>> the data might not get loaded.
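
A minimal way to keep that script alive, assuming dbAdmin.sh lives under the
Chukwa bin directory and that sending its output to a log file is acceptable
(both are assumptions):

  # Sketch only: keep dbAdmin.sh running so table partitions keep getting created.
  nohup $CHUKWA_HOME/bin/dbAdmin.sh > $CHUKWA_HOME/logs/dbAdmin.out 2>&1 &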
>>
>> I am not sure about your question about hourlyRolling or dailyRolling.
>> Those processes should be handled by the data processor (./bin/chukwa
>> dp).
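
If the data processors are not running, a hedged way to launch them in the
background using the command mentioned above (the log file location is an
assumption):

  # Sketch only: start the Chukwa data processors, which handle hourlyRolling and dailyRolling.
  cd $CHUKWA_HOME
  nohup ./bin/chukwa dp > logs/dp.out 2>&1 &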
>>
>> regards,
>> Eric
>>
>> 2010/11/2 良人 <zhu121972@163.com>:
>>>
>>>  Hi: I would like to use Chukwa to analyze Hadoop efficiency, but I ran
>>> into several problems.
>>>     Firstly, I set up Chukwa strictly following the instructions.  My HICC
>>> works normally and can display graphs when there is data in MySQL, for
>>> instance: DFS Throughput Metrics, DFS Data Node Metrics, Cluster Metrics by
>>> Percentage.
>>>     But some field records were missing from MySQL and cannot be displayed
>>> in HICC, for example: DFS Name Node Metrics, DFS FS Name System Metrics,
>>> Map/Reduce Metrics, HDFS Heatmap, Hadoop Activity, Event Viewer,
>>> Node Activity Graph.
>>>   My configuration:
>>>   chukwa-hadoop-0.4.0-client.jar has been placed in Hadoop's lib directory.
>>>   Both hadoop-metrics.properties and Hadoop's log4j.properties are in
>>> Hadoop's conf directory; I have included these files as an attachment.
>>>   The Chukwa release notes say "System metrics collection may fail or be
>>> incomplete if your versions of sar and iostat do not match the ones that
>>> Chukwa expects".  I suspect that the sysstat version on my Ubuntu system
>>> does not match what Chukwa expects; if so, what can I do about that?
>>>   Could anybody give me some suggestions?  Thank you very much.
>>>   By the way, does anybody know how to start hourlyRolling and dailyRolling
>>> in version 0.4.0?  Also, "Error initializing ChukwaClient with list of
>>> currently registered adaptors, clearing our local list of adaptors" appears
>>> in the logs; how can I resolve it?
