ambari-user mailing list archives

From Siddharth Wagle <>
Subject Re: Ambari Metrics
Date Wed, 21 Oct 2015 18:36:42 GMT
Hi Stan,

Do not worry about the Mac comment below. It was only meant to suggest a workaround for incompatible
native binaries, e.g. using a centos6 repo to install AMS on a SLES machine.

If you can provide the hbase-ams-master-<host>.log and ambari-metrics-collector.log
files, I can provide more info. Also, the configs from:

/etc/ams-hbase/conf and /etc/ambari-metrics-collector/conf
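
In case it helps to collect everything in one pass, here is a minimal sketch (the two conf paths are from this thread; the collector log directory is an assumed default, so adjust it for your install):

```shell
# Bundle the AMS logs and configs requested above into one archive.
# /var/log/ambari-metrics-collector is an assumed default log location.
OUT="ams-debug-$(hostname)-$(date +%Y%m%d).tar.gz"
FILES="/var/log/ambari-metrics-collector /etc/ams-hbase/conf /etc/ambari-metrics-collector/conf"
EXISTING=""
for f in $FILES; do
  [ -e "$f" ] && EXISTING="$EXISTING $f"   # keep only paths present on this host
done
if [ -n "$EXISTING" ]; then
  tar czf "$OUT" $EXISTING && echo "wrote $OUT"
else
  echo "no AMS paths found on this host"
fi
```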

- Sid

From: <>
Sent: Wednesday, October 21, 2015 10:38 AM
To: Siddharth Wagle;
Cc: Daryl Heinz
Subject: Re: Ambari Metrics

Hello Sid,

I checked both the cluster with the issue and another of our clusters that is working fine,
but that one is on a later version of Ambari (2.1).  Both have SNAPPY as the compression setting.

Sid, I am not sure I understand the comment below about "MAC".  The cluster is a 48-node
Dell system.

In your prior email you suggested checking the yum and rpm repositories along with the OS version;
I am still doing this and should have the results shortly.



Ad Altiora Tendo

Stanley J. Mlynarczyk - Ph.D.
Chief Technology Officer
Mobile: +1 630-607-2223

On 10/21/15 12:26 PM, Siddharth Wagle wrote:

AMS uses SNAPPY compression by default, so the service would start up fine but fail when Phoenix
tried to CREATE TABLE.

The workaround is to set the compression codec property in ams-site to "NONE" instead of SNAPPY.
So it will work on the Mac, just not with compression enabled.
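
For anyone else hitting this: the change would look like the fragment below in ams-site. The exact property name is an assumption on my part (it is `timeline.metrics.hbase.compression.scheme` in the AMS versions I have seen), so verify it against your stack before applying, and restart the Metrics Collector afterwards.

```xml
<!-- ams-site; property name assumed, verify against your Ambari version -->
<property>
  <name>timeline.metrics.hbase.compression.scheme</name>
  <value>NONE</value>
</property>
```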

- Sid
From: Hitesh Shah <><>
Sent: Wednesday, October 21, 2015 10:20 AM
Cc:<>; Daryl Heinz
Subject: Re: Ambari Metrics


"17:29:40,698  WARN [main] NativeCodeLoader:62 - Unable to load native-hadoop library for
your platform... using builtin-java classes where applicable“

The above message is usually harmless: it warns that the slower pure-Java implementations
are being used instead of the native code paths. Could you explain why this would
affect functionality? Does this mean that one would never be able to deploy/run AMS on
a Mac, because Hadoop has never had native libs built for Darwin?
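
As a quick sanity check, the fallback is easy to grep for; the sample line below is copied from this thread, but on a live host you would point grep at ambari-metrics-collector.log instead:

```shell
# Sample log line from this thread; on a real host, grep the collector log itself.
LOG_LINE='17:29:40,698  WARN [main] NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable'
if echo "$LOG_LINE" | grep -q 'NativeCodeLoader.*native-hadoop'; then
  echo "collector fell back to builtin-java classes"
fi
```

On a cluster node with the Hadoop client installed, `hadoop checknative -a` reports which native libraries (including snappy) actually loaded.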

— Hitesh

On Oct 20, 2015, at 6:50 PM, Siddharth Wagle <><> wrote:

Hi Stan,

Based on the col.txt attached, the real problem is:

17:29:40,698  WARN [main] NativeCodeLoader:62 - Unable to load native-hadoop library for your
platform... using builtin-java classes where applicable

This suggests incorrect binaries were installed for AMS, possibly because the wrong repo URL
was used to install the components.
Can you please provide the ambari.repo URL used to install the service and the version and
flavor of the OS on which Metrics Collector is installed?
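
A sketch of commands that would collect that information (file locations here are the usual defaults and are assumed; SLES keeps its repo files under /etc/zypp/repos.d rather than /etc/yum.repos.d):

```shell
# Report OS flavor/version and the Ambari repo baseurl (default paths assumed).
uname -srm                                                  # kernel and architecture
cat /etc/os-release 2>/dev/null || cat /etc/redhat-release 2>/dev/null || true
grep -h '^baseurl' /etc/yum.repos.d/ambari.repo /etc/zypp/repos.d/ambari.repo 2>/dev/null || true
```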

The hb.txt looks like a clean log file.

Here is a link to all info that is useful for debugging:

Best Regards,

From:<> <><>
Sent: Monday, October 19, 2015 12:33 PM
To: Siddharth Wagle
Cc: Daryl Heinz
Subject: Ambari Metrics

Hello Siddharth,

I am hoping to get your input on an issue that has arisen with the Ambari Metrics Collector.
This is with Ambari 2.0.1 and HDP 2.2.6.  The error message received was:
Caused by: java.sql.SQLException: ERROR 1102 (XCL02): Cannot get all table regions

Caused by: HRegionInfo was null in hbase:meta

------- CUT partial collector log -----

11:13:35,203  WARN [main] ConnectionManager$HConnectionImplementation:1228 - Encountered problems
when prefetch hbase:meta table: HRegionInfo was null or empty in Meta for SYSTEM.CATALOG, row=SYSTEM.CATALOG,,99999999999999
        at org.apache.hadoop.hbase.client.MetaScanner.metaScan(
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.prefetchRegionCache(
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getRegionLocation(
        at org.apache.phoenix.query.ConnectionQueryServicesImpl.getAllTableRegions(
        at org.apache.phoenix.query.ConnectionQueryServicesImpl.checkClientServerCompatibility(
        at org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(
        at org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(
        at org.apache.phoenix.query.DelegateConnectionQueryServices.createTable(
        at org.apache.phoenix.schema.MetaDataClient.createTableInternal(
        at org.apache.phoenix.schema.MetaDataClient.createTable(
        at org.apache.phoenix.compile.CreateTableCompiler$2.execute(
        at org.apache.phoenix.jdbc.PhoenixStatement$
        at org.apache.phoenix.jdbc.PhoenixStatement$
        at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(
        at org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(
        at org.apache.phoenix.query.ConnectionQueryServicesImpl$
        at org.apache.phoenix.query.ConnectionQueryServicesImpl$
        at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(
        at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(
        at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.connect(
        at org.apache.phoenix.jdbc.PhoenixDriver.connect(
        at java.sql.DriverManager.getConnection(
        at java.sql.DriverManager.getConnection(
        at org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.DefaultPhoenixDataSource.getConnection(
        at org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.PhoenixHBaseAccessor.getConnection(
        at org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.PhoenixHBaseAccessor.getConnectionRetryingOnException(
        at org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.PhoenixHBaseAccessor.initMetricSchema(
        at org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.HBaseTimelineMetricStore.initializeSubsystem(
        at org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.HBaseTimelineMetricStore.serviceInit(
        at org.apache.hadoop.service.AbstractService.init(
        at org.apache.hadoop.service.CompositeService.serviceInit(
        at org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer.serviceInit(
        at org.apache.hadoop.service.AbstractService.init(
        at org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer.launchAppHistoryServer(
        at org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer.main(

The (partial) contents of the embedded HBase and collector logs are attached.  Any
light that you could shed on this would be appreciated.  I believe the incident started after
an upgrade on July 20th at 17:29.

Thanks in advance,



Ad Altiora Tendo

Stanley J. Mlynarczyk - Ph.D.
Chief Technology Officer


Mobile: +1 630-607-2223
