incubator-cassandra-user mailing list archives

From Fasika Daksa <cassandra.d...@gmail.com>
Subject Inserting with a large number of columns
Date Mon, 07 Apr 2014 12:52:32 GMT
We are running different workload tests on Cassandra and Redis for
benchmarking. We wrote a Java client to read, write, and measure the
elapsed time of each test case. Cassandra was doing well until we
introduced 20,000 columns: the insertion had been running for a day
when I stopped it.
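
For context, the write path of the client is roughly the sketch below. I am
assuming the DataStax Java driver here; the keyspace, table, and column names
are placeholders:

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;

    public class TimedInsertSketch {
        public static void main(String[] args) {
            // Single local node, as in our test setup.
            Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
            Session session = cluster.connect("benchmark_ks"); // placeholder keyspace

            // Time one write; the real client loops over rows and test cases
            // and records the elapsed time for each.
            long start = System.nanoTime();
            session.execute("INSERT INTO wide_table (id, col0, col1) VALUES (1, 'a', 'b')");
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println("insert took " + elapsedMs + " ms");

            cluster.close();
        }
    }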

First I create the table, index all of the columns, and then insert the data.
I looked into the process, and the part that takes too long is the indexing.
We need to index all the columns because, depending on the query generator,
we use all or part of them. A sketch of the setup follows.
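
The schema setup is roughly the following (again assuming the DataStax Java
driver; names and types are placeholders, since the real DDL is generated by
the client):

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;

    public class SchemaSetupSketch {
        public static void main(String[] args) {
            int numCols = 20000; // the column count that triggers the slowdown
            Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
            Session session = cluster.connect("benchmark_ks"); // placeholder keyspace

            // Build CREATE TABLE with an id key plus numCols text columns.
            StringBuilder ddl = new StringBuilder("CREATE TABLE wide_table (id int PRIMARY KEY");
            for (int i = 0; i < numCols; i++) {
                ddl.append(", col").append(i).append(" text");
            }
            ddl.append(")");
            session.execute(ddl.toString());

            // One secondary index per column, so any column can be filtered on.
            for (int i = 0; i < numCols; i++) {
                session.execute("CREATE INDEX idx_col" + i + " ON wide_table (col" + i + ")");
            }

            cluster.close();
        }
    }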


Can you see a potential solution for my case? Is there any way to optimize
the indexing, or the insertion in general? I also tried creating the indexes
after the insertion, but it made no difference.


We are running this experiment on a single machine with 196 GB of RAM,
1.6 TB of disk space, and an 8-core CPU.

cqlsh 4.1.0 | Cassandra 2.0.3
