Thanks for your replies.
1) I cannot create a new row every X amount of time, since that would not let me get a complete list of the currently active records (this is the only reason I keep this row in the first place); the pattern looks roughly like the sketch after this list.
2) As for compaction, I thought that only row keys are cached, not the columns themselves. I ran a compaction and it did clear most of the data (about 75% of the whole table, and there are other rows in it as well).
It looks like it did not clear all the data from disk (a full compaction should merge the table into a single file, but other files are still left), even though the log reported the data as cleared. Also, after the compaction this row became responsive again.
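
For reference, the pattern from point 1 looks roughly like the following (a simplified pycassa sketch; the keyspace, column family and row key names are placeholders for ours):

import time
from pycassa.pool import ConnectionPool
from pycassa.columnfamily import ColumnFamily

# Placeholder keyspace / column family names for illustration.
pool = ConnectionPool('MyKeyspace', ['127.0.0.1:9160'])
active = ColumnFamily(pool, 'ActiveRecords')
ROW_KEY = 'active_records'

def record_started(record_id):
    # One column per currently active record.
    active.insert(ROW_KEY, {record_id: str(int(time.time()))})

def record_finished(record_id):
    # Removing the column only writes a tombstone; nothing is freed yet.
    active.remove(ROW_KEY, columns=[record_id])

def count_active():
    # get_count has to read past every tombstone left in the row,
    # which is presumably what times out.
    return active.get_count(ROW_KEY)

Counting the active records is a single get_count on that one row, which is why I keep the row even though the deletes pile up.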

Most of our data is written once and kept as history; however, we also have counter columns, which have not been working well (we had trouble with them), and several places where we use a create/delete approach. Now I understand why our data grows so much: old data is not actually cleared at all, it is just marked with tombstones...
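
What I plan to try, in case it helps anyone else, is lowering gc_grace_seconds on that column family so the tombstones can actually be dropped at the next compaction. This assumes pycassa's SystemManager exposes that attribute; the names and the one-hour value are placeholders, and lowering it is only safe if no node can stay down longer than that and later resurrect the deleted columns:

from pycassa.system_manager import SystemManager

sys_mgr = SystemManager('127.0.0.1:9160')
# gc_grace_seconds is how long tombstones are kept before compaction
# is allowed to drop them (the default is 864000 seconds = 10 days).
sys_mgr.alter_column_family('MyKeyspace', 'ActiveRecords',
                            gc_grace_seconds=3600)
sys_mgr.close()

After that, a compaction (nodetool compact) should rewrite the SSTables without the expired tombstones.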
What NoSQL database would you recommend for such usage (write-once/read-many data, mixed with counter columns and with data that is read and written frequently)?

Thanks and best regards
Yulian Oifa




On Mon, Apr 7, 2014 at 5:57 PM, Lukas Steiblys <lukas@doubledutch.me> wrote:
Deleting a column simply produces a tombstone for that column, as far as I know. It’s probably going through all the columns with tombstones and timing out. Compacting more often should help, but maybe Cassandra isn’t the best choice overall for what you’re trying to do.
 
Lukas
 
Sent: Sunday, April 6, 2014 11:54 AM
Subject: Transaction Timeout on get_count
 
Hello
I have a row into which approximately 100 values are written per minute.
Those columns are then deleted (the row contains the list of active records).
When I try to execute get_count on that row I get a transaction timeout, even when the row is empty.
I don't see anything in the Cassandra log on either node, and pending tasks are zero.
What could be the reason for that, and how can it be resolved?
Best regards
Yulian Oifa