cassandra-commits mailing list archives

From "Robert Stupp (JIRA)" <>
Subject [jira] [Commented] (CASSANDRA-10050) Secondary Index Performance Dependent on TokenRange Searched in Analytics
Date Sat, 20 Aug 2016 12:28:20 GMT


Robert Stupp commented on CASSANDRA-10050:

As far as I understand this ticket, it would require at least a schema change. Changing the
order of the keys would require that the token be encoded in the internal 2i tables. I.e.,
instead of _partition-key=index-value + clustering-key=base-primary-key_ it would be
_partition-key=index-value + clustering-key=token-of-base-partition-key+base-primary-key_;
also, "token" would depend on the partitioner. That is irrelevant for BOP (where key order
and token order coincide), but the random and Murmur3 partitioners would require the token
to be included.
Not sure whether the current implementation (i.e. returning the rows in key order) is
something that people rely on. That is, would it be worth adding an option to internal 2i
that says: _store in token order_?
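The two clustering layouts can be contrasted with a small sketch. This is illustrative Python, not Cassandra's internal storage format, and a truncated MD5 merely stands in for the partitioner's Murmur3 token function:

```python
# Sketch (not Cassandra code): contrast the current 2i clustering order
# (base primary key) with the proposed order (token of base partition key,
# then base primary key). A truncated MD5 stands in for the Murmur3 token.
import hashlib

def token(partition_key: str) -> int:
    # Stand-in for the partitioner's token function (Murmur3 in Cassandra).
    return int(hashlib.md5(partition_key.encode()).hexdigest()[:16], 16)

base_keys = ["store-a", "store-b", "store-c", "store-d"]

# Current layout: index entries clustered by base primary key.
key_order = sorted(base_keys)

# Proposed layout: index entries clustered by (token, base primary key),
# so a token-range scan could seek straight to its slice of the index
# partition instead of filtering in key order.
token_order = sorted(base_keys, key=lambda k: (token(k), k))

print(key_order)
print(token_order)
```

With a real partitioner the two orderings generally differ, which is exactly why the token would need to be materialized in the clustering key.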

> Secondary Index Performance Dependent on TokenRange Searched in Analytics
> -------------------------------------------------------------------------
>                 Key: CASSANDRA-10050
>                 URL:
>             Project: Cassandra
>          Issue Type: Improvement
>         Environment: Single node, macbook, 2.1.8
>            Reporter: Russell Spitzer
>             Fix For: 3.x
> In doing some test work on the Spark Cassandra Connector I saw some odd performance when
pushing down range queries with secondary-index filters. When running the queries we see long
stretches of time during which the C* server is not doing any work and the query appears to
hang. That investigation led to the work in this document.
> The Spark Cassandra Connector builds token-range-specific queries and allows the user to
push down relevant fields to C*. Here we have two indexed fields, (size) and (color), being
pushed down to C*.
> {code}
> SELECT count(*) FROM WHERE token("store") > $min AND token("store") <= $max AND color = 'red' AND size = 'P' ALLOW FILTERING;{code}
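The `$min`/`$max` placeholders come from splitting the full token ring into per-task ranges. A hedged sketch of such a splitter (illustrative only; the connector's actual splitter also accounts for data size and replica placement) over Murmur3's ring of [-2^63, 2^63 - 1]:

```python
# Illustrative ring splitter (an assumption about the general approach,
# not the Spark Cassandra Connector's actual code).
MIN_TOKEN = -2**63       # Murmur3Partitioner minimum token
MAX_TOKEN = 2**63 - 1    # Murmur3Partitioner maximum token

def split_ring(num_splits: int):
    span = (MAX_TOKEN - MIN_TOKEN) // num_splits
    ranges = []
    start = MIN_TOKEN
    for i in range(num_splits):
        # Last split absorbs any remainder so the ring is fully covered.
        end = MAX_TOKEN if i == num_splits - 1 else start + span
        ranges.append((start, end))  # used as: token(pk) > start AND token(pk) <= end
        start = end
    return ranges

for lo, hi in split_ring(4):
    print(lo, hi)
```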
> These queries have different token ranges inserted and are executed as separate Spark
tasks. Spark tasks with token ranges near Min(token) end up executing much faster than
those near Max(token), which also happen to throw errors.
> {code}
> Coordinator node timed out waiting for replica nodes' responses] message="Operation timed
out - received only 0 responses." info={'received_responses': 0, 'required_responses': 1,
'consistency': 'ONE'}
> {code}
> I took the queries and ran them through cqlsh to see the difference in time. There is a
linear relationship based on where the queried token range sits: only about 2 seconds for
queries near the beginning of the full token spectrum and over 12 seconds at the end of
the spectrum.
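One way to picture that linear relationship (a toy cost model, an assumption about the scan behavior rather than a statement of Cassandra's internals): if the index scan cannot seek to the start of the requested token range, it must iterate from the lowest token until it passes the range's upper bound, so cost grows with the range's position on the ring.

```python
# Toy cost model (assumption, not Cassandra internals): entries are walked
# from the lowest token; the scan can stop once it passes the range's upper
# bound but cannot skip ahead to its lower bound.
RING = list(range(0, 1_000_000))  # stand-in tokens, already in token order

def entries_scanned(t_min: int, t_max: int) -> int:
    scanned = 0
    for t in RING:          # no seek: always start at the beginning
        scanned += 1
        if t > t_max:       # early exit once past the requested range
            break
    return scanned

low_range = entries_scanned(0, 10_000)           # near Min(token)
high_range = entries_scanned(980_000, 990_000)   # near Max(token)
print(low_range, high_range)
```

Under this model a range near Max(token) pays for skipping nearly the whole index before reaching its slice, which is consistent with the 2-second-to-12-second spread observed above.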
> The question is: can this behavior be improved, or should we not recommend using secondary
indexes with analytics workloads?

This message was sent by Atlassian JIRA
