Subject: Re: Ambari Metrics
From: Hitesh Shah
Date: Wed, 21 Oct 2015 10:20:43 -0700
To: user@ambari.apache.org
Cc: smlynarczyk@prognosive.com, Daryl Heinz
In-Reply-To: <1445392257603.42713@hortonworks.com>

@Siddharth,

"17:29:40,698 WARN [main] NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable"

The above message is usually harmless: it only warns that slower pure-Java implementations are being used in place of the native code paths.
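[For context: NativeCodeLoader essentially tries to load the shared library libhadoop at startup and falls back to the pure-Java code paths when that fails. A rough, hypothetical Python sketch of the same kind of check; the library names and messages are illustrative, not Hadoop's actual code:]

```python
import ctypes.util

def native_available(lib_name: str) -> bool:
    """Return True if a native shared library (e.g. libhadoop) is findable on this platform."""
    return ctypes.util.find_library(lib_name) is not None

# Mirrors the spirit of NativeCodeLoader: warn and fall back when the native
# library is absent (as on a Mac, where no libhadoop is built for Darwin).
if native_available("hadoop"):
    print("using native-hadoop code paths")
else:
    print("WARN: unable to load native-hadoop library... using builtin-java classes")
```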
Could you explain why this would affect the functionality? Does this mean that one would never be able to deploy/run AMS on a Mac, because hadoop has never had any native libs built for Darwin?

thanks
-- Hitesh

On Oct 20, 2015, at 6:50 PM, Siddharth Wagle wrote:

> Hi Stan,
>
> Based on the col.txt attached, the real problem is:
>
> 17:29:40,698 WARN [main] NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
>
> This would mean incorrect binaries were installed for AMS, possibly because the wrong repo URL was used to install the components.
> Can you please provide the ambari.repo URL used to install the service, and the version and flavor of the OS on which the Metrics Collector is installed?
>
> The hb.txt looks like a clean log file.
>
> Here is a link to all info that is useful for debugging:
> https://cwiki.apache.org/confluence/display/AMBARI/Troubleshooting+Guide
>
> Best Regards,
> Sid
>
>
> From: smlynarczyk@prognosive.com
> Sent: Monday, October 19, 2015 12:33 PM
> To: Siddharth Wagle
> Cc: Daryl Heinz
> Subject: Ambari Metrics
>
> Hello Siddharth,
>
> I am hoping to get your input on an issue that has arisen with the Ambari Metrics Collector. This is with Ambari 2.0.1 and HDP 2.2.6.
> The error message received was:
>
> Caused by: java.sql.SQLException: ERROR 1102 (XCL02): Cannot get all table regions
>
> Caused by: java.io.IOException: HRegionInfo was null in hbase:meta
>
> ------- CUT partial collector log -----
>
> 11:13:35,203 WARN [main] ConnectionManager$HConnectionImplementation:1228 - Encountered problems when prefetch hbase:meta table:
> java.io.IOException: HRegionInfo was null or empty in Meta for SYSTEM.CATALOG, row=SYSTEM.CATALOG,,99999999999999
>     at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:170)
>     at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.prefetchRegionCache(ConnectionManager.java:1222)
>     at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1286)
>     at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1135)
>     at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1118)
>     at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1075)
>     at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getRegionLocation(ConnectionManager.java:909)
>     at org.apache.phoenix.query.ConnectionQueryServicesImpl.getAllTableRegions(ConnectionQueryServicesImpl.java:401)
>     at org.apache.phoenix.query.ConnectionQueryServicesImpl.checkClientServerCompatibility(ConnectionQueryServicesImpl.java:853)
>     at org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:797)
>     at org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1107)
>     at org.apache.phoenix.query.DelegateConnectionQueryServices.createTable(DelegateConnectionQueryServices.java:110)
>     at org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:1527)
>     at org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:535)
>     at org.apache.phoenix.compile.CreateTableCompiler$2.execute(CreateTableCompiler.java:184)
>     at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:260)
>     at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:252)
>     at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>     at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:250)
>     at org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1026)
>     at org.apache.phoenix.query.ConnectionQueryServicesImpl$9.call(ConnectionQueryServicesImpl.java:1532)
>     at org.apache.phoenix.query.ConnectionQueryServicesImpl$9.call(ConnectionQueryServicesImpl.java:1501)
>     at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:77)
>     at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:1501)
>     at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:162)
>     at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.connect(PhoenixEmbeddedDriver.java:126)
>     at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:133)
>     at java.sql.DriverManager.getConnection(DriverManager.java:571)
>     at java.sql.DriverManager.getConnection(DriverManager.java:233)
>     at org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.DefaultPhoenixDataSource.getConnection(DefaultPhoenixDataSource.java:69)
>     at org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.PhoenixHBaseAccessor.getConnection(PhoenixHBaseAccessor.java:149)
>     at org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.PhoenixHBaseAccessor.getConnectionRetryingOnException(PhoenixHBaseAccessor.java:127)
>     at org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.PhoenixHBaseAccessor.initMetricSchema(PhoenixHBaseAccessor.java:268)
>     at org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.HBaseTimelineMetricStore.initializeSubsystem(HBaseTimelineMetricStore.java:64)
>     at org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.HBaseTimelineMetricStore.serviceInit(HBaseTimelineMetricStore.java:58)
>     at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
>     at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
>     at org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer.serviceInit(ApplicationHistoryServer.java:84)
>     at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
>     at org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer.launchAppHistoryServer(ApplicationHistoryServer.java:137)
>     at org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer.main(ApplicationHistoryServer.java:147)
>
>
> The (partial) contents of the embedded HBase and collector logs are in the attached. Any light that you could shed on this would be appreciated. I believe the incident started after an upgrade on July 20th at 17:29.
>
>
> Thanks in advance,
>
> Stan
>
> --
> Ad Altiora Tendo
>
> Stanley J. Mlynarczyk - Ph.D.
> Chief Technology Officer
>
> Mobile: +1 630-607-2223