Subject: Re: monitoring CPU cores (resource consumption) in hadoop
From: Ilker Ozkaymak <iozkaymak@gmail.com>
To: user@hadoop.apache.org
Date: Sat, 3 Nov 2012 22:13:17 -0500

Look into Zabbix. It is open source and should do the trick.

On Nov 3, 2012 9:31 PM, "ugiwgh" <ugiwgh@gmail.com> wrote:
> Hi Jim,
> You can get Paramon from the following link:
> http://www.paratera.com/news_show.asp?id=1682&channel=17&classid=18
> The Basic version is free.
> You can get help with this software from me.
>
> --GHui
>
> ------------------ Original ------------------
> From: "Marcos Ortiz" <mlortiz@uci.cu>
> Date: Sun, Nov 4, 2012 05:39 AM
> To: "jim.neofotistos" <jim.neofotistos@oracle.com>
> Cc: "user" <user@hadoop.apache.org>; "ugiwgh" <ugiwgh@gmail.com>
> Subject: Re: monitoring CPU cores (resource consumption) in hadoop
>
> Regards, Jim.
> In the open-source world, I don't know of one.
> In the enterprise world, Boundary is a great choice.
> Look here:
> http://boundary.com/why-boundary/product/
>
> On 11/03/2012 02:59 PM, ugiwgh wrote:
> Paramon can solve this problem. It can monitor CPU cores.
>
> --GHui
>
> ------------------ Original ------------------
> From: "Jim Neofotistos" <jim.neofotistos@oracle.com>
> Date: Sun, Nov 4, 2012 03:00 AM
> To: "user" <user@hadoop.apache.org>
> Subject: monitoring CPU cores (resource consumption) in hadoop
>
> The standard Hadoop monitoring metrics system doesn't allow monitoring of CPU cores. Ganglia's open-source monitoring does not have that capability with the RRD tool either.
>
> top is an option, but I was looking for something cluster-wide.
>
> Jim
>
> --
> Marcos Luis Ortíz Valmaseda
> about.me/marcosortiz
> @marcosluis2186
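For anyone who stays on Ganglia despite the missing per-core view, one workaround is to publish the per-core figures yourself as custom metrics. The sketch below is only an illustration, not something posted in this thread: it assumes a standard Linux /proc/stat layout and that Ganglia's gmetric utility is on the PATH on every node; the metric names ("cpu0_busy", ...) and the 5-second sampling interval are arbitrary choices.

#!/usr/bin/env python
# Minimal sketch (illustrative only): sample per-core CPU utilization from
# /proc/stat and publish one gauge per core through Ganglia's gmetric, so the
# values show up cluster-wide next to the standard host metrics.
# Assumptions: standard Linux /proc/stat layout, gmetric on the PATH, and
# hypothetical metric names such as "cpu0_busy"; run it from cron or a loop.
import subprocess
import time


def read_per_core():
    """Return {core: (busy_jiffies, total_jiffies)} parsed from /proc/stat."""
    stats = {}
    with open("/proc/stat") as f:
        for line in f:
            fields = line.split()
            # Per-core lines look like "cpu0 user nice system idle iowait ..."
            if fields and fields[0].startswith("cpu") and fields[0] != "cpu":
                values = [int(v) for v in fields[1:]]
                idle = values[3] + (values[4] if len(values) > 4 else 0)
                stats[fields[0]] = (sum(values) - idle, sum(values))
    return stats


def sample_utilization(interval=5.0):
    """Sample twice, interval seconds apart; return {core: busy_percent}."""
    first = read_per_core()
    time.sleep(interval)
    second = read_per_core()
    util = {}
    for core, (busy, total) in second.items():
        prev_busy, prev_total = first.get(core, (0, 0))
        delta_total = total - prev_total
        util[core] = 100.0 * (busy - prev_busy) / delta_total if delta_total else 0.0
    return util


if __name__ == "__main__":
    for core, pct in sorted(sample_utilization().items()):
        # One gauge per core; gmond/gmetad aggregate it like any host metric.
        subprocess.call(["gmetric", "--name", "%s_busy" % core,
                         "--value", "%.1f" % pct,
                         "--type", "float", "--units", "%"])

Whether this is worth maintaining versus moving to a tool that already does per-core collection (Zabbix, Paramon, Boundary, as suggested above) is a judgment call; the script only covers collection, not alerting or dashboards.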