incubator-cassandra-user mailing list archives

From Tupshin Harper <tups...@tupshin.com>
Subject Re: Inserting with large number of column
Date Mon, 07 Apr 2014 13:05:11 GMT
More details would be helpful (exact schema, method of inserting data,
etc.), but you can try dropping the indexes and recreating them
after the import has finished.
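As a rough sketch, the drop-then-recreate cycle in CQL might look like the
following (the table name `bench`, column names, and index names here are
hypothetical; run `DESCRIBE TABLE` in cqlsh to see the real index names):

```sql
-- Drop the secondary indexes before the bulk load
-- (hypothetical names; Cassandra's default pattern is <table>_<column>_idx)
DROP INDEX bench_col1_idx;
DROP INDEX bench_col2_idx;

-- ... run the bulk insert from the Java client ...

-- Recreate the indexes once the import has finished
CREATE INDEX bench_col1_idx ON bench (col1);
CREATE INDEX bench_col2_idx ON bench (col2);
```

Recreating an index after the load still has to build it, but building in
bulk is generally cheaper than maintaining every index on every write.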

-Tupshin
On Apr 7, 2014 8:53 AM, "Fasika Daksa" <cassandra.daks@gmail.com> wrote:

> We are running different workload tests on Cassandra and Redis for
> benchmarking. We wrote a Java client to read, write, and measure the
> elapsed time of different test cases. Cassandra was doing great until we
> introduced 20,000 columns; the insertion had been running for a day
> when I stopped it.
>
> First I create the table and index all the columns, then insert the data.
> I looked into the process, and the part that takes too long is the
> indexing. We need to index all the columns because we use all or some of
> them, depending on the query generator.
>
>
> Can you see a potential solution for my case? Is there any way to optimize
> the indexing, or the insertion in general? I also tried indexing after
> insertion, but the result is the same.
>
>
> We are running this experiment on a single machine with 196 GB of RAM,
> 1.6 TB of disk space, and an 8-core CPU.
>
> cqlsh 4.1.0 | Cassandra 2.0.3
>
