ambari-issues mailing list archives

From "Chuan Jin (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (AMBARI-20392) Get aggregate metric records from HBase encounters performance issues
Date Fri, 10 Mar 2017 11:19:04 GMT

     [ https://issues.apache.org/jira/browse/AMBARI-20392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]

Chuan Jin updated AMBARI-20392:
-------------------------------
    Description: 
I have a mini cluster (~6 nodes) managed by Ambari, with a distributed HBase (~3 nodes) holding the metrics collected from these nodes. After I deploy the YARN service, I notice that some widgets (Cluster Memory, Cluster Disk, ...) do not display properly on the YARN service dashboard page. Ambari Server also logs continuous timeout exceptions, complaining that it cannot get timeline metrics because the connection is refused.

The corresponding request looks like this:
/api/v1/clusters/bj_cluster1/services/YARN/components/NODEMANAGER?fields=metrics/yarn/ContainersFailed._rate[1489113738,1489117338,15],metrics/yarn/ContainersCompleted._rate[1489113738,1489117338,15],metrics/yarn/ContainersLaunched._rate[1489113738,1489117338,15],metrics/yarn/ContainersIniting._sum[1489113738,1489117338,15],metrics/yarn/ContainersKilled._rate[1489113738,1489117338,15],metrics/yarn/ContainersRunning._sum[1489113738,1489117338,15],metrics/memory/mem_total._avg[1489113738,1489117338,15],metrics/memory/mem_free._avg[1489113738,1489117338,15],metrics/disk/read_bps._sum[1489113738,1489117338,15],metrics/disk/write_bps._sum[1489113738,1489117338,15],metrics/network/pkts_in._avg[1489113738,1489117338,15],metrics/network/pkts_out._avg[1489113738,1489117338,15],metrics/cpu/cpu_system._sum[1489113738,1489117338,15],metrics/cpu/cpu_user._sum[1489113738,1489117338,15],metrics/cpu/cpu_nice._sum[1489113738,1489117338,15],metrics/cpu/cpu_idle._sum[1489113738,1489117338,15],metrics/cpu/cpu_wio._sum[1489113738,1489117338,15]&format=null_padding&_=1489117333815

In the AMS Collector, this request is transformed into the following query (not the same request):
2017-03-10 16:03:56,178 DEBUG [1537616305@qtp-1324937403-125 - /ws/v1/timeline/metrics?metricNames=cpu_idle._sum%2Cyarn.NodeManagerMetrics.ContainersCompleted._rate%2Cmem_free._avg%2Cpkts_in._avg%2Cyarn.NodeManagerMetrics.ContainersLaunched._rate%2Cyarn.NodeManagerMetrics.ContainersKilled._rate%2Ccpu_wio._sum%2Cyarn.NodeManagerMetrics.ContainersIniting._sum%2Ccpu_system._sum%2Ccpu_user._sum%2Ccpu_nice._sum%2Cyarn.NodeManagerMetrics.ContainersFailed._rate%2Cmem_total._avg%2Cpkts_out._avg%2Cyarn.NodeManagerMetrics.ContainersRunning._sum&appId=NODEMANAGER&startTime=1489129435&endTime=1489133035]
PhoenixTransactSQL:682 - SQL => SELECT /*+ NATIVE_TIME_RANGE(1489129315000) */ METRIC_NAME,
APP_ID, INSTANCE_ID, SERVER_TIME, UNITS, METRIC_SUM, HOSTS_COUNT, METRIC_MAX, METRIC_MIN FROM
METRIC_AGGREGATE WHERE (METRIC_NAME IN (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)) AND
APP_ID = ? AND SERVER_TIME >= ? AND SERVER_TIME < ? ORDER BY METRIC_NAME, SERVER_TIME
LIMIT 15840, condition => Condition{metricNames=[pkts_out, cpu_wio, cpu_idle, yarn.NodeManagerMetrics.ContainersCompleted,
mem_total, cpu_nice, yarn.NodeManagerMetrics.ContainersRunning, pkts_in, yarn.NodeManagerMetrics.ContainersFailed,
yarn.NodeManagerMetrics.ContainersLaunched, mem_free, cpu_user, yarn.NodeManagerMetrics.ContainersKilled,
yarn.NodeManagerMetrics.ContainersIniting, cpu_system], hostnames='null', appId='NODEMANAGER',
instanceId='null', startTime=1489129435, endTime=1489133035, limit=null, grouped=true, orderBy=[],
noLimit=false}

The request timeout is 5s, which means the query that fetches metrics from HBase takes longer than that. I then log in with the Phoenix shell and run the same query against HBase, and it takes nearly 30s to finish. But if I split the big query into smaller pieces, i.e. use fewer values for METRIC_NAME in the WHERE ... IN clause, the results come back in about 1s across several small queries.
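
For illustration, this is roughly the kind of split I mean, run from the Phoenix shell. The metric names, appId and time range are taken from the Condition in the log above; the batch size of five names per query is arbitrary:

SELECT METRIC_NAME, APP_ID, INSTANCE_ID, SERVER_TIME, UNITS,
       METRIC_SUM, HOSTS_COUNT, METRIC_MAX, METRIC_MIN
FROM METRIC_AGGREGATE
-- batch 1 of the original 15-name IN list; each batch is issued as its own query
WHERE METRIC_NAME IN ('cpu_idle', 'cpu_user', 'cpu_system', 'cpu_nice', 'cpu_wio')
  AND APP_ID = 'NODEMANAGER'
  AND SERVER_TIME >= 1489129435 AND SERVER_TIME < 1489133035
ORDER BY METRIC_NAME, SERVER_TIME;
-- batch 2: mem_total, mem_free, pkts_in, pkts_out
-- batch 3: the yarn.NodeManagerMetrics.* counters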

Query performance in HBase depends heavily on the row key design and on using it properly. In the method that gets aggregate metrics, the AMS Collector queries the METRIC_AGGREGATE table in a way that may cause the coprocessor to scan several regions across different RegionServers. If we add more metrics to the service dashboard, the situation will get worse.
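
To make the row key point concrete: as far as I understand the AMS Phoenix schema, the METRIC_AGGREGATE row key is built from (METRIC_NAME, APP_ID, INSTANCE_ID, SERVER_TIME). A sketch of the presumed DDL (from memory, not copied from this cluster; exact column types may differ):

CREATE TABLE IF NOT EXISTS METRIC_AGGREGATE (
  METRIC_NAME  VARCHAR,
  APP_ID       VARCHAR,
  INSTANCE_ID  VARCHAR,
  SERVER_TIME  UNSIGNED_LONG NOT NULL,
  UNITS        CHAR(20),
  METRIC_SUM   DOUBLE,
  HOSTS_COUNT  UNSIGNED_INT,
  METRIC_MAX   DOUBLE,
  METRIC_MIN   DOUBLE
  -- METRIC_NAME is the leading key column, so an IN list of 15 metric names
  -- expands into 15 disjoint key ranges; those ranges can land in different
  -- regions, hence scans on several RegionServers for a single widget request.
  CONSTRAINT pk PRIMARY KEY (METRIC_NAME, APP_ID, INSTANCE_ID, SERVER_TIME));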



> Get aggregate metric records from HBase encounters performance issues
> ---------------------------------------------------------------------
>
>                 Key: AMBARI-20392
>                 URL: https://issues.apache.org/jira/browse/AMBARI-20392
>             Project: Ambari
>          Issue Type: Improvement
>          Components: ambari-metrics
>    Affects Versions: 2.4.2
>            Reporter: Chuan Jin
>



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
