phoenix-dev mailing list archives

From "Josh Elser (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (PHOENIX-2940) Remove STATS RPCs from rowlock
Date Thu, 16 Jun 2016 18:44:05 GMT

    [ https://issues.apache.org/jira/browse/PHOENIX-2940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15334419#comment-15334419
] 

Josh Elser commented on PHOENIX-2940:
-------------------------------------

bq. Not sure I understand this new logic, as it seems like we don't need the if (PTableStats.EMPTY_STATS
== stats) block. Before, we were relying on the PTable meta data cache to find stats, but
now we're relying on this new cache plus querying directly if not found. So clearing the table
from the cache won't have any impact on stats.

Clearing the table from the cache also invalidates any stats for that table in the cache.
We could expose a method on QueryServices to clear the stats cache directly, which would
be a little more explicit. At the very minimum, I can add a comment to clarify.
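To make the coupling concrete, here is a minimal sketch of that behavior. This is not Phoenix's actual TableStatsCache; the class and method names are illustrative assumptions, showing only the point above: invalidating a table's cache entry also drops its cached stats, and a separate stats-only invalidation hook could be exposed alongside it.

```java
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch, not Phoenix's real API: a client-side cache where
// clearing a table entry also clears that table's cached statistics.
class StatsCacheSketch {
    private final ConcurrentHashMap<String, Object> tableCache = new ConcurrentHashMap<>();
    private final ConcurrentHashMap<String, Object> statsCache = new ConcurrentHashMap<>();

    void putTable(String tableName, Object table, Object stats) {
        tableCache.put(tableName, table);
        statsCache.put(tableName, stats);
    }

    Object getStats(String tableName) {
        return statsCache.get(tableName);
    }

    // Clearing the table from the cache also invalidates its stats,
    // which is the behavior discussed in the comment above.
    void invalidateTable(String tableName) {
        tableCache.remove(tableName);
        statsCache.remove(tableName);
    }

    // The more direct alternative suggested: a stats-only invalidation
    // hook that could be exposed (e.g. via QueryServices).
    void invalidateStats(String tableName) {
        statsCache.remove(tableName);
    }
}
```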

bq. One nit - remove this comment since this is now the way stats are always read (whereas
before they came over attached to the PTable):

Will do. I didn't think critically about the comment :)

bq. Second nit - looks like the only call to TableStatsCache.put() is for testing through
ConnectionlessQueryServerImpl which makes sense. Would be good to make a comment in the javadoc
(wow, javadoc - thanks!), that it's exposed for testing.

Sure thing.

bq. Interested in the perf numbers and would be great if Nick Dimiduk could take it for a
spin too.

That's the last step I want to take. I got a clean run of the ITs earlier this week with
the latest patch. I talked to Nick offline yesterday, and he said he had a very tough time
testing the changes initially, so I'm not super confident about being able to understand
the original contention with server-side stats collection. I think I'm just going to
run some before/after numbers to make sure performance is still as expected.

> Remove STATS RPCs from rowlock
> ------------------------------
>
>                 Key: PHOENIX-2940
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-2940
>             Project: Phoenix
>          Issue Type: Improvement
>         Environment: HDP 2.3 + Apache Phoenix 4.6.0
>            Reporter: Nick Dimiduk
>            Assignee: Josh Elser
>             Fix For: 4.8.0
>
>         Attachments: PHOENIX-2940.001.patch, PHOENIX-2940.002.patch, PHOENIX-2940.003.patch
>
>
> We have an unfortunate situation wherein we potentially execute many RPCs while holding
a row lock. This problem is discussed in detail on the user list thread ["Write path blocked
by MetaDataEndpoint acquiring region lock"|http://search-hadoop.com/m/9UY0h2qRaBt6Tnaz1&subj=Write+path+blocked+by+MetaDataEndpoint+acquiring+region+lock].
During some situations, the [MetaDataEndpoint|https://github.com/apache/phoenix/blob/10909ae502095bac775d98e6d92288c5cad9b9a6/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L492]
coprocessor will attempt to refresh its view of the schema definitions and statistics. This
involves [taking a rowlock|https://github.com/apache/phoenix/blob/10909ae502095bac775d98e6d92288c5cad9b9a6/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2862],
executing a scan against the [local region|https://github.com/apache/phoenix/blob/10909ae502095bac775d98e6d92288c5cad9b9a6/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L542],
and then a scan against a [potentially remote|https://github.com/apache/phoenix/blob/10909ae502095bac775d98e6d92288c5cad9b9a6/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L964]
statistics table.
> This issue is apparently exacerbated by the use of user-provided timestamps (in my case,
the use of the ROW_TIMESTAMP feature, or perhaps as in PHOENIX-2607). When combined with other
issues (PHOENIX-2939), we end up with total gridlock in our handler threads -- everyone queued
behind the rowlock, scanning and rescanning SYSTEM.STATS. Because this happens in the MetaDataEndpoint,
the means by which all clients refresh their knowledge of schema, gridlock in that RS can
effectively stop all forward progress on the cluster.
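The shape of the problem and of the fix under discussion can be sketched as follows. This is a hypothetical illustration, not Phoenix's MetaDataEndpointImpl: the lock, supplier, and method names are assumptions. It contrasts performing the remote stats scan while holding the row lock (the gridlock scenario above) with resolving stats through a cache so the lock protects only local work.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.Supplier;

// Hypothetical sketch (illustrative names, not Phoenix's real API):
// keep remote RPCs out of the critical section guarded by the row lock.
class RowLockStatsSketch {
    private final Map<String, String> statsCache = new ConcurrentHashMap<>();
    private final ReentrantLock rowLock = new ReentrantLock();

    // Anti-pattern described in the issue: a potentially remote scan runs
    // while the row lock is held, so every other handler queues behind it.
    String lookupHoldingLock(String table, Supplier<String> remoteStatsScan) {
        rowLock.lock();
        try {
            return remoteStatsScan.get(); // RPC under the row lock
        } finally {
            rowLock.unlock();
        }
    }

    // Fix's shape: resolve stats via a cache (loading happens outside the
    // lock, and only on a miss), then hold the lock for local work only.
    String lookupCached(String table, Supplier<String> remoteStatsScan) {
        String stats = statsCache.computeIfAbsent(table, t -> remoteStatsScan.get());
        rowLock.lock();
        try {
            return stats; // local-only work under the lock
        } finally {
            rowLock.unlock();
        }
    }
}
```

On a cache hit, `lookupCached` takes the lock without ever touching the remote statistics table, which is the property the patch is after.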



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
