Subject: Re: Posting Metrics to Ambari
From: Bryan Bende
To: user@ambari.apache.org
Date: Wed, 29 Jul 2015 16:11:01 -0400

Feel free to update as necessary:
https://issues.apache.org/jira/browse/AMBARI-12584

-Bryan
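For reference, a minimal sketch of the client-side workaround discussed in the quoted thread below: normalize appId to lowercase when posting, so that later queries (which appear to match lowercase) line up. This is an illustration only; it assumes Python with the requests library, the localhost:6188 collector used throughout the thread, and the payload shape shown in the collector DEBUG log further down.

    import time
    import requests

    COLLECTOR = "http://localhost:6188/ws/v1/timeline/metrics"

    def post_metric(name, app_id, hostname, value):
        """POST one metric datapoint, normalizing appId to lowercase on the ingest side."""
        now_ms = int(time.time() * 1000)
        payload = {
            "metrics": [{
                "metricname": name,
                "appid": app_id.lower(),  # lowercase so queries match regardless of case
                "hostname": hostname,
                "timestamp": now_ms,
                "starttime": now_ms,
                "metrics": {str(now_ms): value},
            }]
        }
        resp = requests.post(COLLECTOR, json=payload, timeout=10)
        resp.raise_for_status()

    # Example: post_metric("FlowFiles_Received_Last_5_mins", "NIFI", "localhost", 42.0)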
On Wed, Jul 29, 2015 at 3:38 PM, Siddharth Wagle wrote:
> Hi Bryan,
>
> Please go ahead and file a Jira. HBase/Phoenix is case sensitive. Ideally we should retain the sensitivity, meaning if you POST lowercase you are expected to query in lowercase.
>
> Will look into the code and continue the discussion on the Jira.
>
> Regards,
>
> Sid
>
> ------------------------------
> *From:* Bryan Bende
> *Sent:* Wednesday, July 29, 2015 12:28 PM
> *To:* user@ambari.apache.org
> *Subject:* Re: Posting Metrics to Ambari
>
> FWIW I was finally able to get this to work, and the issue seems to be case sensitivity in the appId field...
>
> Send a metric with APP_ID=*NIFI* then query:
> http://localhost:6188/ws/v1/timeline/metrics?metricNames=FlowFiles_Received_Last_5_mins&appId=NIFI&hostname=localhost&startTime=1438193080000&endTime=1438193082000
> Gets 0 results.
>
> Send a metric with APP_ID=*nifi* then query:
> http://localhost:6188/ws/v1/timeline/metrics?metricNames=FlowFiles_Received_Last_5_mins&appId=NIFI&hostname=localhost&startTime=1438193080000&endTime=1438193082000
> Gets results, even though appId=NIFI in the query.
>
> Would this be worthy of a jira? I would expect that if the query side is always going to search lower case, then the ingest side should be normalizing to lower case.
>
> Thanks,
>
> Bryan
>
> On Tue, Jul 28, 2015 at 1:07 PM, Bryan Bende wrote:
>
>> As an update, I was able to create a new service and get it installed in Ambari, and got a widget to display on the metrics panel for the service.
>>
>> So now it seems like the missing piece is getting the metrics exposed through the Ambari REST API, which may or may not be related to not getting results from the collector service API. I have a metrics.json with the following:
>>
>>> {
>>>   "NIFI_MASTER": {
>>>     "Component": [{
>>>       "type": "ganglia",
>>>       "metrics": {
>>>         "default": {
>>>           "metrics/nifi/FlowFilesReceivedLast5mins": {
>>>             "metric": "FlowFiles_Received_Last_5_mins",
>>>             "pointInTime": false,
>>>             "temporal": true
>>>           }
>>>         }
>>>       }
>>>     }]
>>>   }
>>> }
>>
>> and widgets.json with the following:
>>
>>> {
>>>   "layouts": [
>>>     {
>>>       "layout_name": "default_nifi_dashboard",
>>>       "display_name": "Standard NiFi Dashboard",
>>>       "section_name": "NIFI_SUMMARY",
>>>       "widgetLayoutInfo": [
>>>         {
>>>           "widget_name": "Flow Files Received Last 5 mins",
>>>           "description": "The number of flow files received in the last 5 minutes.",
>>>           "widget_type": "GRAPH",
>>>           "is_visible": true,
>>>           "metrics": [
>>>             {
>>>               "name": "FlowFiles_Received_Last_5_mins",
>>>               "metric_path": "metrics/nifi/FlowFilesReceivedLast5mins",
>>>               "service_name": "NIFI",
>>>               "component_name": "NIFI_MASTER"
>>>             }
>>>           ],
>>>           "values": [
>>>             {
>>>               "name": "Flow Files Received",
>>>               "value": "${FlowFiles_Received_Last_5_mins}"
>>>             }
>>>           ],
>>>           "properties": {
>>>             "display_unit": "%",
>>>             "graph_type": "LINE",
>>>             "time_range": "1"
>>>           }
>>>         }
>>>       ]
>>>     }
>>>   ]
>>> }
>>
>> Hitting this end-point doesn't show any metrics though:
>>
>> http://localhost:8080/api/v1/clusters/Sandbox/services/NIFI/components/NIFI_MASTER
>>
>> -Bryan
>>
>> On Tue, Jul 28, 2015 at 9:39 AM, Bryan Bende wrote:
>>
>>> The data is present in the aggregate tables...
>>>
>>> 0: jdbc:phoenix:localhost:61181:/hbase> SELECT * from METRIC_RECORD WHERE METRIC_NAME = 'FlowFiles_Received_Last_5_mins' order by SERVER_TIME desc limit 10;
>>>
>>> | METRIC_NAME                    | HOSTNAME  | SERVER_TIME   | APP_ID | INSTANCE_ID                          |
>>> | FlowFiles_Received_Last_5_mins | localhost | 1438047369541 | NIFI   | 5dbaaa80-0760-4241-80aa-b00b52f8efb4 |
>>>
>>> 0: jdbc:phoenix:localhost:61181:/hbase> SELECT * from METRIC_RECORD_MINUTE WHERE METRIC_NAME = 'FlowFiles_Received_Last_5_mins' order by SERVER_TIME desc limit 10;
>>>
>>> | METRIC_NAME                    | HOSTNAME  | APP_ID | INSTANCE_ID                          | SERVER_TIME   |
>>> | FlowFiles_Received_Last_5_mins | localhost | NIFI   | 5dbaaa80-0760-4241-80aa-b00b52f8efb4 | 1438047369541 |
>>>
>>> 0: jdbc:phoenix:localhost:61181:/hbase> SELECT * from METRIC_RECORD_HOURLY WHERE METRIC_NAME = 'FlowFiles_Received_Last_5_mins' order by SERVER_TIME desc limit 10;
>>>
>>> | METRIC_NAME                    | HOSTNAME  | APP_ID | INSTANCE_ID                          | SERVER_TIME   |
>>> | FlowFiles_Received_Last_5_mins | localhost | NIFI   | 5dbaaa80-0760-4241-80aa-b00b52f8efb4 | 1438045569276 |
>>>
>>> Trying a smaller time range (2 mins surrounding the timestamp from the first record above)...
>>>
>>> http://localhost:6188/ws/v1/timeline/metrics?metricNames=FlowFiles_Received_Last_5_mins&appId=NIFI&hostname=localhost&startTime=1438047300000&endTime=1438047420000
>>>
>>> http://localhost:6188/ws/v1/timeline/metrics?metricNames=FlowFiles_Received_Last_5_mins&appId=NIFI&hostname=localhost&startTime=1438047300000&endTime=1438047420000&precision=seconds
>>>
>>> http://localhost:6188/ws/v1/timeline/metrics?metricNames=FlowFiles_Received_Last_5_mins&appId=NIFI&hostname=localhost&startTime=1438047300000&endTime=1438047420000&precision=minutes
>>>
>>> http://localhost:6188/ws/v1/timeline/metrics?metricNames=FlowFiles_Received_Last_5_mins&appId=NIFI&hostname=localhost&startTime=1438047300000&endTime=1438047420000&precision=hours
>>>
>>> Those all get no results.
>>> The only time I got a different response was this example:
>>>
>>> http://localhost:6188/ws/v1/timeline/metrics?metricNames=FlowFiles_Received_Last_5_mins&appId=NIFI&hostname=localhost&startTime=143804556927&endTime=1438047420000
>>>
>>> which returned:
>>>
>>> {"exception":"BadRequestException","message":"java.lang.Exception: The time range query for precision table exceeds row count limit, please query aggregate table instead.","javaClassName":"org.apache.hadoop.yarn.webapp.BadRequestException"}
>>>
>>> On Mon, Jul 27, 2015 at 10:50 PM, Siddharth Wagle <swagle@hortonworks.com> wrote:
>>>
>>>> For Step1, when you say exposing metrics through the Ambari REST API... are you talking about the metrics collector REST API, or through the Ambari Server REST API?
>>>>
>>>> Answer: Ambari REST API. Note that this is the intended use, because this is what ties the metrics to your cluster resources. For example, you can ask Ambari's API to give you metrics for the active NameNode only.
>>>>
>>>> Is SERVER_TIME the field that has to fall between startTime and endTime?
>>>>
>>>> Yes, that is correct.
>>>>
>>>> There is nothing special about the query; you seem to have the fragments right. The only thing is that you are querying for a large time window. AMS would not return data from the METRIC_RECORD table for such a large time window; it would try to find it in the aggregate tables, METRIC_RECORD_MINUTE or HOURLY. Try reducing your time range, and also check the aggregate tables; the data should still be present in those tables.
>>>>
>>>> Precision params:
>>>>
>>>> https://cwiki.apache.org/confluence/display/AMBARI/Metrics+Collector+API+Specification
>>>>
>>>> -Sid
>>>>
>>>> ------------------------------
>>>> *From:* Bryan Bende
>>>> *Sent:* Monday, July 27, 2015 6:21 PM
>>>> *To:* user@ambari.apache.org
>>>> *Subject:* Re: Posting Metrics to Ambari
>>>>
>>>> Hi Jaimin,
>>>>
>>>> For Step1, when you say exposing metrics through the Ambari REST API... are you talking about the metrics collector REST API, or through the Ambari Server REST API?
>>>>
>>>> I am able to see data through Phoenix, as an example:
>>>>
>>>> | METRIC_NAME                    | HOSTNAME  | SERVER_TIME   | APP_ID |
>>>> | FlowFiles_Received_Last_5_mins | localhost | 1438045869329 | NIFI   |
>>>>
>>>> Then I try to use this API call:
>>>>
>>>> http://localhost:6188/ws/v1/timeline/metrics?metricNames=FlowFiles_Received_Last_5_mins&appId=NIFI&hostname=localhost&startTime=1437870332000&endTime=1438129532000
>>>>
>>>> and I get: {"metrics":[]}
>>>>
>>>> Something must not be lining up with what I am sending over. Is SERVER_TIME the field that has to fall between startTime and endTime?
>>>>
>>>> -Bryan
>>>>
>>>> On Mon, Jul 27, 2015 at 1:40 PM, Jaimin Jetly wrote:
>>>>
>>>>> Hi Bryan,
>>>>>
>>>>> There are 2 steps in this that need to be achieved.
>>>>>
>>>>> STEP-1: Exposing service metrics successfully through the Ambari REST API.
>>>>>
>>>>> STEP-2: Ambari UI displaying widgets composed from the newly exposed metrics via the Ambari server.
>>>>>
>>>>> As step-1 is a pre-requisite to step-2, can you confirm that you were able to achieve step-1 (exposing service metrics successfully through the Ambari REST API)?
>>>>>
>>>>> *NOTE:* /var/lib/ambari-server/resources/common-services/AMBARI_METRICS/0.1.0/metrics.json contains the metrics specific to the Ambari Metrics service. If the new metrics that you want to expose are related to any other service, then please edit/create the metrics.json file in that specific service package and not in the Ambari Metrics service package. widgets.json also needs to be changed/added in the same service package and not at /var/lib/ambari-server/resources/stacks/HDP/2.0.6/widgets.json (unless you want to add system heatmaps for a stack that inherits the HDP-2.0.6 stack).
>>>>>
>>>>> -- Thanks
>>>>>
>>>>> Jaimin
>>>>>
>>>>> ------------------------------
>>>>> *From:* Bryan Bende
>>>>> *Sent:* Sunday, July 26, 2015 2:10 PM
>>>>> *To:* user@ambari.apache.org
>>>>> *Subject:* Re: Posting Metrics to Ambari
>>>>>
>>>>> Hi Sid,
>>>>>
>>>>> Thanks for the pointers about how to add a metric to the UI. Based on those instructions I modified /var/lib/ambari-server/resources/common-services/AMBARI_METRICS/0.1.0/metrics.json and added the following based on the test metrics I posted:
>>>>>
>>>>> "metrics/SmokeTest/FakeMetric": {
>>>>>   "metric": "AMBARI_METRICS.SmokeTest.FakeMetric",
>>>>>   "pointInTime": true,
>>>>>   "temporal": true
>>>>> }
>>>>>
>>>>> From digging around the filesystem there appears to be a widgets.json in /var/lib/ambari-server/resources/stacks/HDP/2.0.6/widgets.json. It looks like this file only contained the definitions of the heatmaps, so I wasn't sure if this was the right place, but just to see what happened I modified it as follows:
>>>>>
>>>>> 1) Added a whole new layout:
>>>>>
>>>>> http://pastebin.com/KqeT8xfe
>>>>>
>>>>> 2) Added a heatmap for the test metric:
>>>>>
>>>>> http://pastebin.com/AQDT7u6v
>>>>>
>>>>> Then I restarted the HDP VM but I don't see anything in the UI under Metric Actions -> Add, or under Heatmaps. Anything that seems completely wrong about what I did? Maybe I should be going down the route of defining a new service type for the system I will be sending metrics from?
>>>>>
>>>>> Sorry to keep bothering with all these questions, I just don't have any previous experience with Ambari.
>>>>>
>>>>> Thanks,
>>>>>
>>>>> Bryan
>>>>>
>>>>> On Sun, Jul 26, 2015 at 12:10 AM, Siddharth Wagle <swagle@hortonworks.com> wrote:
>>>>>
>>>>>> The AMS API does not allow open-ended queries, so startTime and endTime are required fields; the curl call should return the error code with the appropriate response.
>>>>>>
>>>>>> If this doesn't happen please go ahead and file a Jira.
>>>>>>
>>>>>> Using AMS through the Ambari UI, after getting the plumbing work with metrics.json completed, would be much easier. The AMS API does need some refinement. Jiras / bugs are welcome.
>>>>>>
>>>>>> -Sid
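A small sketch of building a collector query that satisfies the required startTime/endTime parameters mentioned above (epoch milliseconds, which SERVER_TIME must fall between), plus the optional precision parameter from the API specification linked earlier. It assumes only Python's standard library and the localhost:6188 collector used throughout this thread:

    import time
    from urllib.parse import urlencode
    from urllib.request import urlopen

    COLLECTOR = "http://localhost:6188/ws/v1/timeline/metrics"

    def query_url(metric, app_id, hostname, minutes_back=10, precision=None):
        """Build a collector query; startTime/endTime are required epoch-millis values."""
        end_ms = int(time.time() * 1000)
        start_ms = end_ms - minutes_back * 60 * 1000
        params = {
            "metricNames": metric,
            "appId": app_id,
            "hostname": hostname,
            "startTime": start_ms,
            "endTime": end_ms,
        }
        if precision:  # e.g. "seconds", "minutes", "hours"
            params["precision"] = precision
        return COLLECTOR + "?" + urlencode(params)

    # Example:
    # print(urlopen(query_url("FlowFiles_Received_Last_5_mins", "nifi", "localhost")).read())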
>>>>>>
>>>>>> ------------------------------
>>>>>> *From:* Siddharth Wagle
>>>>>> *Sent:* Saturday, July 25, 2015 9:01 PM
>>>>>> *To:* user@ambari.apache.org
>>>>>> *Subject:* Re: Posting Metrics to Ambari
>>>>>>
>>>>>> No dev work needed; you only need to modify the metrics.json file and then add a widget from the UI.
>>>>>>
>>>>>> Stack details:
>>>>>>
>>>>>> https://cwiki.apache.org/confluence/display/AMBARI/Stack+Defined+Metrics
>>>>>>
>>>>>> UI specifics:
>>>>>>
>>>>>> https://cwiki.apache.org/confluence/display/AMBARI/Enhanced+Service+Dashboard
>>>>>>
>>>>>> -Sid
>>>>>>
>>>>>> ------------------------------
>>>>>> *From:* Bryan Bende
>>>>>> *Sent:* Saturday, July 25, 2015 7:10 PM
>>>>>> *To:* user@ambari.apache.org
>>>>>> *Subject:* Re: Posting Metrics to Ambari
>>>>>>
>>>>>> Quick update, I was able to connect with the Phoenix 4.2.2 client and I did get results querying with:
>>>>>>
>>>>>> SELECT * from METRIC_RECORD WHERE METRIC_NAME = 'AMBARI_METRICS.SmokeTest.FakeMetric' order by SERVER_TIME desc limit 10;
>>>>>>
>>>>>> Now that I know the metrics are posting, I am less concerned about querying through the REST API.
>>>>>>
>>>>>> Is there any way to get a custom metric added to the main page of Ambari? Or does this require development work?
>>>>>>
>>>>>> Thanks,
>>>>>>
>>>>>> Bryan
>>>>>>
>>>>>> On Sat, Jul 25, 2015 at 9:42 PM, Bryan Bende wrote:
>>>>>>
>>>>>>> Hi Sid,
>>>>>>>
>>>>>>> Thanks for the suggestions. I turned on DEBUG for the metrics collector (had to do this through the Ambari UI configs section) and now I can see some activity... When I post a metric I see:
>>>>>>>
>>>>>>> 01:30:18,372 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics] TimelineWebServices:270 - Storing metrics: {
>>>>>>>   "metrics" : [ {
>>>>>>>     "timestamp" : 1432075898000,
>>>>>>>     "metricname" : "AMBARI_METRICS.SmokeTest.FakeMetric",
>>>>>>>     "appid" : "amssmoketestfake",
>>>>>>>     "hostname" : "localhost",
>>>>>>>     "starttime" : 1432075898000,
>>>>>>>     "metrics" : {
>>>>>>>       "1432075898000" : 0.963781711428,
>>>>>>>       "1432075899000" : 1.432075898E12
>>>>>>>     }
>>>>>>>   } ]
>>>>>>> }
>>>>>>>
>>>>>>> 01:30:18,372 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics] DefaultPhoenixDataSource:67 - Metric store connection url: jdbc:phoenix:localhost:61181:/hbase
>>>>>>>
>>>>>>> 01:30:18,376 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics] MutationState:361 - Sending 2 mutations for METRIC_RECORD with 8 key values of total size 925 bytes
>>>>>>>
>>>>>>> 01:30:18,380 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics] MutationState:436 - Total time for batch call of 2 mutations into METRIC_RECORD: 3 ms
>>>>>>>
>>>>>>> 01:30:18,381 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics] log:40 - RESPONSE /ws/v1/timeline/metrics 200
>>>>>>>
>>>>>>> So it looks like it posted successfully. Then I hit:
>>>>>>>
>>>>>>> http://localhost:6188/ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric
>>>>>>>
>>>>>>> and I see...
>>>>>>>
>>>>>>> 01:31:16,952 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric] ParallelIterators:412 - Guideposts: ]
>>>>>>>
>>>>>>> 01:31:16,953 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric] ParallelIterators:481 - The parallelScans: [[{"timeRange":[0,1437874276946],"batch":-1,"startRow":"AMBARI_METRICS.SmokeTest.FakeMetric","stopRow":"AMBARI_METRICS.SmokeTest.FakeMetric\\x01","loadColumnFamiliesOnDemand":null,"totalColumns":1,"cacheBlocks":true,"families":{"0":["ALL"]},"maxResultSize":-1,"maxVersions":1,"filter":"","caching":-1}]]
>>>>>>>
>>>>>>> 01:31:16,953 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric] BaseQueryPlan:243 - Iterator ready: MergeSortTopNResultIterator [limit=1, count=0, orderByColumns=[METRIC_NAME DESC, SERVER_TIME DESC], ptr1=, ptr2=]
>>>>>>>
>>>>>>> 01:31:16,957 DEBUG [phoenix-1-thread-171] ParallelIterators:629 - Id: d0c9c381-f35f-48e6-b970-8b6d5997684b, Time: 3ms, Scan: {"timeRange":[0,1437874276946],"batch":-1,"startRow":"AMBARI_METRICS.SmokeTest.FakeMetric","stopRow":"AMBARI_METRICS.SmokeTest.FakeMetric\\x01","loadColumnFamiliesOnDemand":null,"totalColumns":1,"cacheBlocks":true,"families":{"0":["ALL"]},"maxResultSize":-1,"maxVersions":1,"filter":"","caching":10000}
>>>>>>>
>>>>>>> 01:31:16,959 DEBUG [95266635@qtp-171166092-2 - /ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric] PhoenixHBaseAccessor:552 - Aggregate records size: 0
>>>>>>>
>>>>>>> I'll see if I can get the Phoenix client working and see what that returns.
>>>>>>>
>>>>>>> Thanks,
>>>>>>>
>>>>>>> Bryan
>>>>>>>
>>>>>>> On Fri, Jul 24, 2015 at 5:44 PM, Siddharth Wagle <swagle@hortonworks.com> wrote:
>>>>>>>
>>>>>>>> Hi Bryan,
>>>>>>>>
>>>>>>>> Few things you can do:
>>>>>>>>
>>>>>>>> 1. Turn on DEBUG mode by changing log4j.properties at /etc/ambari-metrics-collector/conf/
>>>>>>>>
>>>>>>>> This might reveal more info. I don't think we print every metric received to the log in 2.0 or 2.1; I did recently add this option to trunk if TRACE is enabled.
>>>>>>>>
>>>>>>>> 2. Connect using Phoenix directly and you can do a SELECT query like this:
>>>>>>>>
>>>>>>>> SELECT * from METRIC_RECORD WHERE METRIC_NAME = '<your-metric-name>' order by SERVER_TIME desc limit 10;
>>>>>>>>
>>>>>>>> Instructions for connecting to Phoenix:
>>>>>>>>
>>>>>>>> https://cwiki.apache.org/confluence/display/AMBARI/Phoenix+Schema
>>>>>>>>
>>>>>>>> 3. What API call are you making to get metrics?
>>>>>>>>
>>>>>>>> E.g.: http://<ams-collector>:6188/ws/v1/timeline/metrics?metricNames=<your-metric-name>&startTime=<epoch>&endTime=<epoch>&hostname=<hostname>
>>>>>>>>
>>>>>>>> -Sid
>>>>>>>>
>>>>>>>> ------------------------------
>>>>>>>> *From:* Bryan Bende
>>>>>>>> *Sent:* Friday, July 24, 2015 2:03 PM
>>>>>>>> *To:* user@ambari.apache.org
>>>>>>>> *Subject:* Posting Metrics to Ambari
>>>>>>>>
>>>>>>>> I'm interested in sending metrics to Ambari and I've been looking at the Metrics Collector REST API described here:
>>>>>>>>
>>>>>>>> https://cwiki.apache.org/confluence/display/AMBARI/Metrics+Collector+API+Specification
>>>>>>>>
>>>>>>>> I figured the easiest way to test it would be to get the latest HDP Sandbox...
>>>>>>>> so I downloaded and started it up. The Metrics Collector service wasn't running so I started it, and also added port 6188 to the VM port forwarding. From there I used the example POST on the Wiki page and made a successful POST which got a 200 response. After that I tried the query, but could never get any results to come back.
>>>>>>>>
>>>>>>>> I know this list is not specific to HDP, but I was wondering if anyone has any suggestions as to what I can look at to figure out what is happening with the data I am posting.
>>>>>>>>
>>>>>>>> I was watching the metrics collector log while posting and querying and didn't see any activity besides the periodic aggregation.
>>>>>>>>
>>>>>>>> Any suggestions would be greatly appreciated.
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>>
>>>>>>>> Bryan
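As a closing reference for "step-1" from the thread (seeing the metrics through the Ambari Server REST API rather than the collector), a hedged sketch of checking the component endpoint mentioned earlier. The Sandbox cluster name and endpoint come from the thread; the admin/admin credentials are an assumption for a default HDP Sandbox install:

    import requests

    # Component endpoint referenced in the thread; the default admin/admin
    # credentials are an assumption for a stock HDP Sandbox.
    URL = ("http://localhost:8080/api/v1/clusters/Sandbox"
           "/services/NIFI/components/NIFI_MASTER")

    resp = requests.get(URL, auth=("admin", "admin"), timeout=10)
    resp.raise_for_status()
    component = resp.json()

    # Once the service's metrics.json is wired up, the per-metric values should
    # appear under a "metrics" key in this response.
    print(component.get("metrics", "no metrics exposed yet"))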