hadoop-common-user mailing list archives

From Patai Sangbutsarakum <silvianhad...@gmail.com>
Subject Re: collecting CPU, mem, iops of hadoop jobs
Date Tue, 20 Dec 2011 21:11:02 GMT
Thanks for the reply, but I don't think the metrics exposed to Ganglia are
what I am really looking for.

What I am looking for is something like this (but not limited to):

Job_xxxx_yyyy
CPU time: 10204 sec   <-- aggregated across all task nodes
IOPS: 2344            <-- aggregated across all datanodes
MEM: 30G              <-- aggregated

etc.

Job_aaa_bbb
CPU time:
IOPS:
MEM:

Sorry for the ambiguous question.
Thanks
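
(For anyone finding this in the archives: in the 0.20 line, Hadoop already keeps per-task counters such as FILE_BYTES_READ/FILE_BYTES_WRITTEN, and newer 0.20 branches add CPU_MILLISECONDS and memory counters; JobClient.getJob(id).getCounters() returns the job-level totals, which are just the sum over tasks. A minimal sketch of that per-job aggregation, using made-up per-task values and illustrative counter names rather than a live cluster:)

```java
import java.util.*;

public class JobMetrics {
    // Sum per-task counter maps into per-job totals.
    // In real Hadoop 0.20 code the per-task numbers would come from the
    // job's Counters (JobClient.getJob(jobId).getCounters()); the values
    // below are hypothetical, just to show the aggregation shape.
    static Map<String, Long> aggregate(List<Map<String, Long>> perTask) {
        Map<String, Long> totals = new TreeMap<>();
        for (Map<String, Long> task : perTask) {
            for (Map.Entry<String, Long> e : task.entrySet()) {
                totals.merge(e.getKey(), e.getValue(), Long::sum);
            }
        }
        return totals;
    }

    public static void main(String[] args) {
        List<Map<String, Long>> tasks = new ArrayList<>();
        tasks.add(Map.of("CPU_MILLISECONDS", 6_000_000L, "FILE_BYTES_READ", 1_000L));
        tasks.add(Map.of("CPU_MILLISECONDS", 4_204_000L, "FILE_BYTES_READ", 1_344L));
        Map<String, Long> job = aggregate(tasks);
        System.out.println("CPU time: " + job.get("CPU_MILLISECONDS") / 1000 + " sec");
        System.out.println("Bytes read: " + job.get("FILE_BYTES_READ"));
    }
}
```

(The same totals are also printed by the 0.20 CLI with `hadoop job -status <jobid>`, so a script polling that output is another low-effort starting point.)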

On Tue, Dec 20, 2011 at 12:47 PM, He Chen <airbots@gmail.com> wrote:
> You may need Ganglia. It is cluster monitoring software.
>
> On Tue, Dec 20, 2011 at 2:44 PM, Patai Sangbutsarakum <
> silvianhadoop@gmail.com> wrote:
>
>> Hi Hadoopers,
>>
>> We're running Hadoop 0.20 on CentOS 5.5. I am trying to find a way to
>> collect the CPU time, memory usage, and IOPS of each Hadoop job.
>> What would be a good starting point? A document? An API?
>>
>> Thanks in advance
>> -P
>>
