cassandra-user mailing list archives

From Anishek Agarwal <>
Subject Re: Reading hundreds of thousands of rows at once?
Date Wed, 22 Apr 2015 08:25:06 GMT
I think these will help speed things up:

- removing compression
- you mentioned a lot of independent columns. If you are always going to
query all of them together, another thing that will help is to store a full
JSON (or some custom object representation) of the value data and change the
model to just have survey_id, hour_created, respondent_id, *json_value*
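A minimal sketch of that collapsed model, with Python doing the packing and unpacking (the table layout and the `q1`/`q2` field names here are illustrative assumptions, not the poster's actual schema): serialize the independent value columns into one JSON string on write, and expand it back into DataFrame columns on read.

```python
import json
import pandas as pd

# Collapsed model: one JSON blob per row instead of many independent columns.
# Illustrative CQL (hypothetical names):
#   CREATE TABLE survey_values (
#       survey_id int, hour_created timestamp, respondent_id text,
#       json_value text,
#       PRIMARY KEY ((survey_id, hour_created), respondent_id)
#   );

def pack_values(row_values):
    """Serialize the independent value columns into a single JSON string."""
    return json.dumps(row_values, sort_keys=True)

def unpack_to_frame(rows):
    """Expand (survey_id, hour_created, respondent_id, json_value) rows
    back into a wide DataFrame for analysis."""
    records = []
    for survey_id, hour_created, respondent_id, json_value in rows:
        rec = {"survey_id": survey_id,
               "hour_created": hour_created,
               "respondent_id": respondent_id}
        rec.update(json.loads(json_value))  # JSON keys become columns
        records.append(rec)
    return pd.DataFrame(records)

# Hypothetical survey rows, as the driver might return them:
rows = [
    (1, "2015-04-22T08:00", "r1", pack_values({"q1": 5, "q2": "yes"})),
    (1, "2015-04-22T08:00", "r2", pack_values({"q1": 3, "q2": "no"})),
]
df = unpack_to_frame(rows)
```

The win is that Cassandra reads one cell per row instead of many, at the cost of always deserializing the whole blob even when only a few fields are needed.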

On Wed, Apr 22, 2015 at 1:09 PM, John Anderson <> wrote:

> Hey, I'm looking at querying around 500,000 rows that I need to pull into
> a Pandas data frame for processing.  Currently, testing this on a single
> Cassandra node takes around 21 seconds:
> I tried introducing multiprocessing so I could use 4 processes at a time
> to query this and I got it down to 14 seconds:
> Although shaving off 7 seconds is great, it still isn't really where I
> would like to be in regards to performance; for this many rows I'd really
> like to get down to a max of 1-2 seconds of query time.
> What types of optimizations can I make to improve the read performance
> when querying a large set of data?  Will this timing speed up linearly as I
> add more nodes?
> This is what the schema looks like currently:
> I'm not tied to the current schema at all; it's mostly just a replication
> of what we have in SQL Server. I'm more interested in what things I can
> change to make querying it faster.
> Thanks,
> John
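For parallel reads like the multiprocessing experiment above, the usual pattern is to give each worker a disjoint slice of the partitioner's token range rather than having several processes page through one result set. A sketch of the range arithmetic only (Murmur3Partitioner spans -2^63 to 2^63 - 1; the per-range query itself, e.g. a `token()`-bounded SELECT, is omitted and the worker wiring is left out):

```python
# Split the full Murmur3 token range into n contiguous, disjoint sub-ranges,
# one per worker process.  Each worker then scans only its slice with
# token()-bounded queries, so reads proceed in parallel without overlap.

MIN_TOKEN = -2**63        # Murmur3Partitioner minimum token
MAX_TOKEN = 2**63 - 1     # Murmur3Partitioner maximum token

def split_token_range(n, lo=MIN_TOKEN, hi=MAX_TOKEN):
    """Return n (start, end] sub-ranges covering (lo, hi] with no gaps."""
    width = (hi - lo) // n
    bounds = [lo + i * width for i in range(n)] + [hi]
    return list(zip(bounds[:-1], bounds[1:]))

ranges = split_token_range(4)
```

With four sub-ranges, four processes can each run `SELECT ... WHERE token(pk) > start AND token(pk) <= end` over their own slice, which is what makes the parallelism pay off instead of contending on the same pages.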
