Date: Thu, 7 May 2015 03:30:09 +0000 (UTC)
From: Jayesh Thakrar
To: "user@ambari.apache.org", Siddharth Wagle, Jayesh Thakrar
Subject: Re: Kafka broker metrics not appearing in REST API

More info....

I was doing some "stress-testing" and, interestingly, the Metrics Collector crashed twice and I had to restart it. (I don't like a file-based HBase for the Metrics Collector, but I am not very confident about configuring it to point to an existing HBase cluster.)

Also, after this email thread, I looked up the Metrics Collector logs and saw errors like this:

METRIC_RECORD' at region=METRIC_RECORD,,1429966316307.947cfa22f884d035c09fe804b1f5402c., hostname=dtord01flm03p.dc.dotomi.net,60455,1429737430103, seqNum=243930
13:09:37,619  INFO [phoenix-1-thread-349921] RpcRetryingCaller:129 - Call exception, tries=11, retries=35, started=835564 ms ago, cancelled=false, msg=row 'kafka.network.RequestMetrics.Metadata-RequestsPerSec.1MinuteRate^@dtord01flm27p.dc.dotomi.net^@^@^@^AL��:�kafka_broker' on table 'METRIC_RECORD' at region=METRIC_RECORD,kafkark.RequestMetrics.Metadata-RequestsPerSec.1MinuteRate\x00dtord01flm27p.dc.dotomi.net\x00\x00\x00\x01L\xED\xED:\xE5kafka_broker,1429966316307.d488f5e58d54c3251cb81fdfa475dd45., hostname=dtord01flm03p.dc.dotomi.net,60455,1429737430103, seqNum=243931
13:10:58,082  INFO [phoenix-1-thread-349920] RpcRetryingCaller:129 - Call exception, tries=12, retries=35, started=916027 ms ago, cancelled=false, msg=row '' on table 'METRIC_RECORD' at region=METRIC_RECORD,,1429966316307.947cfa22f884d035c09fe804b1f5402c., hostname=dtord01flm03p.dc.dotomi.net,60455,1429737430103, seqNum=243930
13:10:58,082  INFO [phoenix-1-thread-349921] RpcRetryingCaller:129 - Call exception, tries=12, retries=35, started=916027 ms ago, cancelled=false, msg=row 'kafka.network.RequestMetrics.Metadata-RequestsPerSec.1MinuteRate^@dtord01flm27p.dc.dotomi.net^@^@^@^AL��:�kafka_broker' on table 'METRIC_RECORD' at region=METRIC_RECORD,kafkark.RequestMetrics.Metadata-RequestsPerSec.1MinuteRate\x00dtord01flm27p.dc.dotomi.net\x00\x00\x00\x01L\xED\xED:\xE5kafka_broker,1429966316307.d488f5e58d54c3251cb81fdfa475dd45., hostname=dtord01flm03p.dc.dotomi.net,60455,1429737430103, seqNum=243931
13:10:58,112 ERROR [Thread-25] TimelineMetricAggregator:221 - Exception during aggregating metrics.
org.apache.phoenix.exception.PhoenixIOException: org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, exceptions:
Sat Apr 25 13:10:58 UTC 2015, null, java.net.SocketTimeoutException: callTimeout=900000, callDuration=938097: row '' on table 'METRIC_RECORD' at region=METRIC_RECORD,,1429966316307.947cfa22f884d035c09fe804b1f5402c., hostname=dtord01flm03p.dc.dotomi.net,60455,1429737430103, seqNum=243930

        at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:107)
        at org.apache.phoenix.iterate.ParallelIterators.getIterators(ParallelIterators.java:527)
        at org.apache.phoenix.iterate.MergeSortResultIterator.getIterators(MergeSortResultIterator.java:48)
        at org.apache.phoenix.iterate.MergeSortResultIterator.minIterator(MergeSortResultIterator.java:63)
        at org.apache.phoenix.iterate.MergeSortResultIterator.next(MergeSortResultIterator.java:90)
        at org.apache.phoenix.iterate.MergeSortTopNResultIterator.next(MergeSortTopNResultIterator.java:87)
        at org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:739)
        at org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricAggregator.aggregateMetricsFromResultSet(TimelineMetricAggregator.java:104)
        at org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricAggregator.aggregate(TimelineMetricAggregator.java:72)
        at org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.AbstractTimelineAggregator.doWork(AbstractTimelineAggregator.java:217)
        at org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.AbstractTimelineAggregator.runOnce(AbstractTimelineAggregator.java:94)
        at org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.AbstractTimelineAggregator.run(AbstractTimelineAggregator.java:70)
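To narrow down whether the collector itself still holds the Kafka series, querying the Metrics Collector directly (bypassing the Ambari server) can help; the collector host, the default port 6188, and the query parameters below are assumptions about the AMS timeline API rather than values confirmed on this cluster:

# Ask the Metrics Collector directly for one of the Kafka series named in the log above.
# <ams-collector-host>, port 6188, and the parameter names are assumed AMS defaults.
curl "http://<ams-collector-host>:6188/ws/v1/timeline/metrics?metricNames=kafka.network.RequestMetrics.Metadata-RequestsPerSec.1MinuteRate&appId=kafka_broker&hostname=dtord01flm27p.dc.dotomi.net"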




From: Jayesh Thakrar <j_thakrar@yahoo.com>
To: Siddharth Wagle <swagle@hortonworks.com>; "user@ambari.apache.org" <user@ambari.apache.org>
Sent: Wednesday, May 6, 2015 10:07 PM
Subject: Re: Kafka broker metrics not appearing in REST API

Hi Siddharth,

Yes, I am using Ambari 2.0 with the Ambari Metrics service.
The interesting thing is that I was getting them for some time, but not anymore.
I also know that the metrics are being collected, since I can see them on the dashboard.
Any pointers for troubleshooting?
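As one concrete check on the Ambari side, a temporal query for a single Kafka metric should return data points if the Ambari server can still read them from the collector; the metric path and the [startTime,endTime,step] window below are illustrative guesses, not values verified on this cluster:

# Temporal metrics query against the Ambari server; the metric path and the
# epoch-second window [startTime,endTime,step] are placeholders.
curl --user admin:admin 'http://dtord01flm01p:8080/api/v1/clusters/ord_flume_kafka_prod/components/KAFKA_BROKER?fields=metrics/kafka/server/BrokerTopicMetrics/MessagesInPerSec/1MinuteRate[1430960000,1430967200,60]'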

And by the way, it would be nice to have a raw count of messages received rather than a computed count-per-minute metric.
TSDB does a good job of giving me cumulative and rate-per-second graphs and numbers.

Thanks in advance,
Jayesh



From: Siddharth Wagle <swagle@hortonworks.com>
To: "user@ambari.apache.org" <user@ambari.apache.org>; Jayesh Thakrar <j_thakrar@yahoo.com>
Sent: Wednesday, May 6, 2015 10:03 PM
Subject: Re: Kafka broker metrics not appearing in REST API

Hi Jayesh,

Are you using Ambari 2.0 with Ambari Metrics service?

BR,
Sid



From: Jayesh Thakrar <j_thakrar@yahoo.com>
Sent: Wednesday, May 06, 2015 7:53 PM
To: user@ambari.apache.org
Subject: Kafka broker metrics not appearing in REST API
 
Hi,

I have installed 2 clusters with Ambari, Storm, and Kafka.
After the install, I was able to get metrics for both Storm and Kafka via the REST API.
This worked fine for a week, but for the past 2 days I have not been getting Kafka metrics.

I need to push the metrics to an OpenTSDB cluster.
I do get host metrics and Nimbus metrics, but not KAFKA_BROKER metrics.

I did have maintenance mode turned on for some time, but it is turned off now.
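For reference, a minimal sketch of pushing one data point into OpenTSDB over its HTTP /api/put endpoint (the TSDB host, metric name, tags, and the sample timestamp/value below are placeholders):

# Minimal sketch: push a single data point to OpenTSDB over HTTP.
# <tsdb-host>, the metric name, the tags, and the sample timestamp/value are placeholders.
curl -X POST "http://<tsdb-host>:4242/api/put" \
     -H 'Content-Type: application/json' \
     -d '{"metric": "kafka.broker.messages_in.1min_rate",
          "timestamp": 1430969400,
          "value": 123.4,
          "tags": {"host": "dtord01flm27p.dc.dotomi.net", "cluster": "ord_flume_kafka_prod"}}'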

[jthakrar@dtord01hdp0101d ~]$ curl --user admin:admin 'http://dtord01flm01p:8080/api/v1/clusters/ord_flume_kafka_prod/components/NIMBUS?fields=metrics'
{
  "href" : "http://dtord01flm01p:8080/api/v1/clusters/ord_flume_kafka_prod/components/NIMBUS?fields=metrics",
  "ServiceComponentInfo" : {
    "cluster_name" : "ord_flume_kafka_prod",
    "component_name" : "NIMBUS",
    "service_name" : "STORM"
  },
  "metrics" : {
    "storm" : {
      "nimbus" : {
        "freeslots" : 54.0,
        "supervisors" : 27.0,
        "topologies" : 0.0,
        "totalexecutors" : 0.0,
        "totalslots" : 54.0,
        "totaltasks" : 0.0,
        "usedslots" : 0.0
      }
    }
  }
}

[jthakrar@dtord01hdp0101d ~]$ curl --user admin:admin 'http://dtord01flm01p:8080/api/v1/clusters/ord_flume_kafka_prod/components/KAFKA_BROKER?fields=metrics'
{
  "href" : "http://dtord01flm01p:8080/api/v1/clusters/ord_flume_kafka_prod/components/KAFKA_BROKER?fields=metrics",
  "ServiceComponentInfo" : {
    "cluster_name" : "ord_flume_kafka_prod",
    "component_name" : "KAFKA_BROKER",
    "service_name" : "KAFKA"
  }
}

[jthakrar@dtord01hdp0101d ~]$ curl --user admin:admin 'http://dtord01flm01p:8080/api/v1/clusters/ord_flume_kafka_prod/components/SUPERVISOR?fields=metrics'
{
  "href" : "http://dtord01flm01p:8080/api/v1/clusters/ord_flume_kafka_prod/components/SUPERVISOR?fields=metrics",
  "ServiceComponentInfo" : {
    "cluster_name" : "ord_flume_kafka_prod",
    "component_name" : "SUPERVISOR",
    "service_name" : "STORM"
  }
}
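One more check that can help localize the gap is asking an individual broker's host_component for its metrics, using the same URL conventions as the queries above; the broker host name below is taken from the collector log earlier in the thread:

# Per-host check: does a single broker's host_component expose any metrics?
# The broker host name comes from the collector log above.
curl --user admin:admin 'http://dtord01flm01p:8080/api/v1/clusters/ord_flume_kafka_prod/hosts/dtord01flm27p.dc.dotomi.net/host_components/KAFKA_BROKER?fields=metrics'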




