ambari-user mailing list archives

From Siddharth Wagle <swa...@hortonworks.com>
Subject Re: Can't get Ambari Metrics to write to HDP 2.2.4 HBase
Date Thu, 09 Jul 2015 23:57:12 GMT
Inline.

________________________________
From: Ken Barclay <kbarclay@ancestry.com>
Sent: Thursday, July 09, 2015 4:46 PM
To: Siddharth Wagle; Sumit Mohanty
Subject: Re: Can't get Ambari Metrics to write to HDP 2.2.4 HBase

Hi Sid,

Thanks, I can install Phoenix and see how far I get.

Could I understand more about the role HBase is playing in this scenario? Is it the case that
the Collector needs to create the table schema and so on in HBase, but it’s not used in
any way once the system is running? And that the Collector will take some path I specify in
HDFS and use that to write out the data? I wanted to use HBase because it’s fast: will HDFS
be able to keep up if I’m monitoring hundreds of nodes?

AMS data is written to Phoenix tables in HBase. In "embedded" mode, HBase is configured as
standalone and writes to the local FS.

In "distributed" mode, HBase is configured to write to the cluster's HDFS. (Note: in both
cases I am talking about ams-hbase, the one that Ambari Metrics starts.)
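
As a rough illustration of the difference, the relevant ams-hbase-site properties look
something like the following (paths and hostnames are placeholders, not values from this
thread):

```xml
<!-- Embedded mode: standalone HBase writing to the local filesystem -->
<property>
  <name>hbase.cluster.distributed</name>
  <value>false</value>
</property>
<property>
  <name>hbase.rootdir</name>
  <value>file:///var/lib/ambari-metrics-collector/hbase</value>
</property>

<!-- Distributed mode: ams-hbase writing to the cluster's HDFS -->
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://namenode.example.com:8020/ams/hbase</value>
</property>
```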

In your case you would be utilizing the cluster's HBase writing to HDFS. What I meant was that
ams-hbase will still run, because Ambari will start it; it just won't be utilized for anything.

I’m just looking for a fast, scalable replacement for Ganglia/Nagios that speaks HDP and
does alerting, so I don’t have to write all my own HDP alerts in Bosun/TSDB. (I’m collecting
plenty of HDP metrics currently, but only have a handful of alerts set up.)

Metric alerts will be part of 2.2, and are supported right now for all JMX metrics (point-in-time
values, which are better suited to alerts).

In Ambari 2.1, you can add Graphs and visualize any of the metrics collected by the backend.
(https://cwiki.apache.org/confluence/display/AMBARI/Enhanced+Service+Dashboard)

If you are monitoring one cluster, you do not have to set up TSDB to visualize data with Ambari
2.1.

Do you know when the external storage option might be ready?

Not in 2.1. I have an Epic open for 2.2; feel free to leave a comment on the Jira. We should
get it in by 2.2.

Thanks for your help
Ken

From: Siddharth Wagle <swagle@hortonworks.com>
Date: Thursday, July 9, 2015 at 4:09 PM
To: Sumit Mohanty <smohanty@hortonworks.com>
Cc: Ken Barclay <kbarclay@ancestry.com>
Subject: Re: Can't get Ambari Metrics to write to HDP 2.2.4 HBase


Hi Ken,


What you are looking for, "Ambari to use the existing HBase as its datastore for metrics",
is not the distributed mode.

We envision this as an "external" mode for the Ambari Metrics Service, where the user can point
to their own datastore. It is not currently a supported feature.


"distributed" mode means ams-hbase writes to HDFS, rather than running HBase in standalone mode.


Since you are down the path of hacking around this, let me give you some pointers. (Theoretically,
it should be possible to get this to work)


The Metric Collector runs two processes: 1. an embedded HBase, and 2. the Collector API layer.

Process 2 uses the Phoenix client to create the table schema in HBase.

So you need Phoenix 4.2.0 enabled on your HBase.

The Collector API daemon will look at "hbase.zookeeper.quorum" inside /etc/ambari-metrics-collector/conf/hbase-site,
and try to connect to it using a JDBC URL.
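
To make that last step concrete, here is a small sketch of how a Phoenix client's JDBC URL
is assembled from the ZooKeeper settings in hbase-site. The `buildUrl` helper is hypothetical
(it is not Collector code); the URL layout `jdbc:phoenix:<quorum>:<port>:<znode.parent>` is
Phoenix's standard connection-string format:

```java
// Sketch: assembling a Phoenix JDBC URL from the ZooKeeper quorum, client
// port, and znode parent that the Collector reads from hbase-site.
// buildUrl is a hypothetical helper for illustration only.
public class PhoenixUrl {

    static String buildUrl(String zkQuorum, int zkClientPort, String znodeParent) {
        return "jdbc:phoenix:" + zkQuorum + ":" + zkClientPort + ":" + znodeParent;
    }

    public static void main(String[] args) {
        // Placeholder quorum hosts; /hbase-unsecure is the znode parent an
        // unsecured HDP cluster typically uses (as discussed in this thread).
        System.out.println(buildUrl("zk1.example.com,zk2.example.com,zk3.example.com",
                2181, "/hbase-unsecure"));
    }
}
```

If the quorum, port, or znode parent in hbase-site doesn't match the running cluster, the
connection fails in exactly the way described further down this thread.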


Note: Ambari will still start the HBase daemon, since external mode is not supported yet. If
you get everything working, you may just tune down the memory settings for the embedded HBase
and let it run.
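
For reference, the heap knobs for the embedded HBase live in the ams-hbase-env configuration;
the property names below are as they appear in Ambari 2.x, and the values are only illustrative
low-footprint settings, not recommendations from this thread:

```properties
# ams-hbase-env: shrink the unused embedded HBase (illustrative values)
hbase_master_heapsize=512m
hbase_regionserver_heapsize=512m
```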


The wiki pages should help you find out what goes where:

https://cwiki.apache.org/confluence/display/AMBARI/Operations


-Sid


________________________________
From: Sumit Mohanty
Sent: Thursday, July 09, 2015 2:30 PM
To: Siddharth Wagle
Subject: Re: Can't get Ambari Metrics to write to HDP 2.2.4 HBase


FYI

________________________________
From: Ken Barclay <kbarclay@ancestry.com>
Sent: Thursday, July 09, 2015 2:27 PM
To: user@ambari.apache.org
Subject: Can't get Ambari Metrics to write to HDP 2.2.4 HBase

Hello,

I’m trying to get Ambari 2.0.1 Monitoring running in our 4-node test cluster in a Centos
6 environment. We already have monitoring data coming from OpenTSDB (via Bosun) that we’re
storing in HBase, and I want Ambari to use the existing HBase as its datastore also.

Ambari seems to want to start its own HBase Master, RegionServers, etc., even when I try to
use ‘distributed’ mode.

I’ve set the Metrics Service operation mode to ‘distributed’, changed ams-hbase-site
in Ambari to have the right ZK quorum and hbase.master.info.bindAddress, and made all the
ports the same as for our existing HBase. I also went down quite a rabbit hole changing znode.parent
from /hbase to /hbase-unsecure.
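
For anyone following along, that znode change corresponds to a single hbase-site property;
the value below is the one an unsecured HDP cluster typically uses:

```xml
<property>
  <name>zookeeper.znode.parent</name>
  <value>/hbase-unsecure</value>
</property>
```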

On server startup I see the message

15:05:31,160  INFO [main] ZooKeeperRegistry:108 - ClusterId read in ZooKeeper is null

Followed by

12:32:47,282  INFO [main] ConnectionManager$HConnectionImplementation:1613 - getMaster attempt
25 of 35 failed; retrying after sleep of 20023, exception=java.io.IOException: Can't get master
address from ZooKeeper; znode data == null

Is this a fixable situation? I heard something about Ambari’s distributed storage being
in ‘technical preview’ but it would be great to get this working.

Thanks
Ken