cassandra-user mailing list archives

From Sruti S <sruti.shivaku...@gmail.com>
Subject Re: Getting all unique keys
Date Fri, 18 Aug 2017 13:28:30 GMT
hi:

Is this sensor data, hence the timestamp? How are you generating this 'key'
field? Can you make the 'key' field alone the primary key? Even if not, the
fact that it is part of the PK may already make such queries fast.

However, are there other attributes that could be added to define a unique
business key? If so, you could make those attributes part of the primary key
so an index is used.
It would also help to understand why the list of distinct keys is needed, and
why at this constant query rate.

Do you need to ensure the same key is not reused? Note that a plain Cassandra
insert is an upsert and will silently overwrite; to make a duplicate insert
fail you would need a lightweight transaction (INSERT ... IF NOT EXISTS).
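A sketch of enforcing key uniqueness with a lightweight transaction. The table here (my_keys, keyed on 'key' alone) is hypothetical and not part of the original schema:

```cql
-- Hypothetical table that holds each business key exactly once.
CREATE TABLE my_keys (
    key text PRIMARY KEY
);

-- IF NOT EXISTS turns the insert into a lightweight transaction:
-- the result row's [applied] column is false when the key already exists.
INSERT INTO my_keys (key) VALUES ('sensor-42') IF NOT EXISTS;
```

Lightweight transactions use Paxos under the hood, so at ~20M inserts per minute the coordination cost would need to be measured before committing to this.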

If you *must* query constantly, you could also consider a secondary index
to help speed things up.
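For reference, the general secondary-index syntax looks like this. The indexed column and index name are illustrative only; in the quoted schema 'key' is already the partition key and needs no index, so whether an index helps depends on which column the eventual query filters on:

```cql
-- Hypothetical: index a non-key column so it becomes queryable by value.
CREATE INDEX value_idx ON my_table (value);

SELECT key FROM my_table WHERE value = 3.5;
```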
HTH.
Look forward to further clarification.


Friday, August 18, 2017, Avi Levi <avi@indeni.com> wrote:

> Hi
>
> what is the most efficient way to get a distinct key list from a big table
> (aprox 20 mil inserts per minute) ?
>
> equivalent to `select distinct key from my_table` for this table
>
> CREATE TABLE my_table (
>     key text,
>     timestamp bigint,
>     value double,
>     PRIMARY KEY (key, timestamp) )
>
> I need to execute this query quite often ( every couple of minutes )
>
> I can of course maintain a table to hold only unique set of keys but this
> is of course error prone so I rather avoid it. but it's an option.
>
> Cheers
>
> Avi
>
