phoenix-dev mailing list archives

From "James Taylor (JIRA)" <>
Subject [jira] [Commented] (PHOENIX-2940) Remove STATS RPCs from rowlock
Date Thu, 26 May 2016 05:10:12 GMT


James Taylor commented on PHOENIX-2940:

To expand on what Josh said, a simple solution, since stats are generated asynchronously at
an infrequent interval, would be:
- Do not cache stats on the server side at all by removing this block of code from MetaDataEndPointImpl.getTable():
{code}
        if (tenantId == null) {
            HTableInterface statsHTable = null;
            try {
                statsHTable = ServerUtil.getHTableForCoprocessorScan(env, ...);
                stats = StatisticsUtil.readStatistics(statsHTable, physicalTableName.getBytes(), ...);
                timeStamp = Math.max(timeStamp, stats.getTimestamp());
            } catch (org.apache.hadoop.hbase.TableNotFoundException e) {
                // ... logs "<stats table> not online yet?"
            } finally {
                if (statsHTable != null) statsHTable.close();
            }
        }
{code}
- Introduce a scheduled timer in ConnectionQueryServicesImpl that queries the SYSTEM.STATS
table through the StatisticsUtil.readStatistics() call at the frequency of the existing {{QueryServices.STATS_UPDATE_FREQ_MS_ATTRIB}}
config param, and add the results to a new LRU cache using {{}}
keyed by PTable.getKey().
- In BaseResultIterators.getGuidePosts(), get the guideposts from the new cache instead of
from the PTable. We can have the stats fault-in when not found. 
- Eventually (or maybe even now?), we can remove the stats field from the PTable protobuf.
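The scheduled-refresh-plus-LRU-cache idea above could be sketched roughly as follows. This is a hypothetical illustration, not Phoenix code: the class name {{StatsCacheSketch}} and the {{readStatsFromServer}} method are stand-ins for what would really be a SYSTEM.STATS scan via StatisticsUtil.readStatistics(), and the cache would be keyed by PTable.getKey() rather than a String.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch of a client-side guidepost cache: a scheduled refresher plus an
// LRU map with fault-in on miss. All names here are illustrative stand-ins,
// not Phoenix APIs.
public class StatsCacheSketch {
    private static final int MAX_ENTRIES = 1000;

    // LRU cache keyed by the table key; an access-order LinkedHashMap
    // evicts the least recently used entry once MAX_ENTRIES is exceeded.
    private final Map<String, String> cache =
        new LinkedHashMap<String, String>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                return size() > MAX_ENTRIES;
            }
        };

    private final ScheduledExecutorService scheduler =
        Executors.newSingleThreadScheduledExecutor();

    // Stand-in for a SYSTEM.STATS read via StatisticsUtil.readStatistics().
    String readStatsFromServer(String tableKey) {
        return "guideposts-for-" + tableKey;
    }

    // Refresh every cached entry at the STATS_UPDATE_FREQ_MS_ATTRIB frequency.
    void startRefresher(long freqMs) {
        scheduler.scheduleAtFixedRate(() -> {
            synchronized (cache) {
                // Copy keys first: access-order maps treat get/put during
                // iteration as structural modification.
                for (String key : new ArrayList<>(cache.keySet())) {
                    cache.put(key, readStatsFromServer(key));
                }
            }
        }, freqMs, freqMs, TimeUnit.MILLISECONDS);
    }

    // Fault-in on miss, as getGuidePosts() would do against the new cache.
    String getGuidePosts(String tableKey) {
        synchronized (cache) {
            String stats = cache.get(tableKey);
            if (stats == null) {
                stats = readStatsFromServer(tableKey);
                cache.put(tableKey, stats);
            }
            return stats;
        }
    }
}
```

The point of the design is that the only RPC a query-time lookup can trigger is the fault-in on a cache miss, which happens on the client; the server-side getTable() path, and therefore the rowlock, never touches SYSTEM.STATS.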

> Remove STATS RPCs from rowlock
> ------------------------------
>                 Key: PHOENIX-2940
>                 URL:
>             Project: Phoenix
>          Issue Type: Improvement
>         Environment: HDP 2.3 + Apache Phoenix 4.6.0
>            Reporter: Nick Dimiduk
>            Assignee: Josh Elser
> We have an unfortunate situation wherein we potentially execute many RPCs while holding
a row lock. This problem is discussed in detail on the user list thread ["Write path blocked
by MetaDataEndpoint acquiring region lock"|].
During some situations, the [MetaDataEndpoint|]
coprocessor will attempt to refresh its view of the schema definitions and statistics. This
involves [taking a rowlock|],
executing a scan against the [local region|],
and then a scan against a [potentially remote|]
statistics table.
> This issue is apparently exacerbated by the use of user-provided timestamps (in my case,
the use of the ROW_TIMESTAMP feature, or perhaps as in PHOENIX-2607). When combined with other
issues (PHOENIX-2939), we end up with total gridlock in our handler threads -- everyone queued
behind the rowlock, scanning and rescanning SYSTEM.STATS. Because this happens in the MetaDataEndpoint
(the means by which all clients refresh their knowledge of the schema), gridlock in that RS can
effectively stop all forward progress on the cluster.

This message was sent by Atlassian JIRA
