cassandra-commits mailing list archives

From "Jonathan Ellis (JIRA)" <j...@apache.org>
Subject [jira] Created: (CASSANDRA-1040) read failure during flush
Date Fri, 30 Apr 2010 13:34:54 GMT
read failure during flush
-------------------------

                 Key: CASSANDRA-1040
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-1040
             Project: Cassandra
          Issue Type: Bug
          Components: Core
            Reporter: Jonathan Ellis
            Assignee: Jonathan Ellis
            Priority: Critical


Joost Ouwerkerk writes:

On a single-node Cassandra cluster with basic config (-Xmx1G):
loop {
  * insert 5,000 records in a single column family, with UUID keys and random string values (between 1 and 1000 chars) in 5 different columns spanning two different supercolumns
  * delete all the data by iterating over the rows with get_range_slices(ONE) and calling remove(QUORUM) on each row id returned (with a path containing only the column family)
  * count the number of non-tombstone rows by iterating over the rows with get_range_slices(ONE) and testing for data; break if not zero
}
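
For illustration, a rough client-side sketch of this loop against the 0.6-era Thrift interface might look like the following (Python). The generated module names and call signatures, the "KeySpace"/"Super1" names, and the supercolumn/column names are assumptions, not taken from the report:

# Hedged sketch of the repro loop against a 0.6-era Thrift-generated client.
# Module names, signatures, and the KeySpace/Super1 names are assumptions.
import random
import string
import time
import uuid

from thrift.transport import TSocket, TTransport
from thrift.protocol import TBinaryProtocol
from cassandra import Cassandra
from cassandra.ttypes import (ColumnParent, ColumnPath, ConsistencyLevel,
                              KeyRange, SlicePredicate, SliceRange)

KEYSPACE, CF = "KeySpace", "Super1"  # placeholder keyspace / super column family

socket = TSocket.TSocket("localhost", 9160)
transport = TTransport.TBufferedTransport(socket)
client = Cassandra.Client(TBinaryProtocol.TBinaryProtocol(transport))
transport.open()

def random_value():
    # random string value between 1 and 1000 chars, as in the report
    return "".join(random.choice(string.ascii_letters)
                   for _ in range(random.randint(1, 1000)))

def all_rows():
    # iterate over every row in the column family at ConsistencyLevel.ONE
    parent = ColumnParent(column_family=CF)
    predicate = SlicePredicate(slice_range=SliceRange(start="", finish="", count=1000))
    key_range = KeyRange(start_key="", end_key="", count=10000)
    return client.get_range_slices(KEYSPACE, parent, predicate, key_range,
                                   ConsistencyLevel.ONE)

while True:
    # 1. insert 5,000 rows: 5 columns spread across two supercolumns per row
    for _ in range(5000):
        key = str(uuid.uuid4())
        for i in range(5):
            path = ColumnPath(column_family=CF,
                              super_column="sc%d" % (i % 2),
                              column="col%d" % i)
            client.insert(KEYSPACE, key, path, random_value(),
                          int(time.time() * 1e6), ConsistencyLevel.ONE)

    # 2. delete every row: remove(QUORUM) with a path naming only the column family
    for key_slice in all_rows():
        client.remove(KEYSPACE, key_slice.key, ColumnPath(column_family=CF),
                      int(time.time() * 1e6), ConsistencyLevel.QUORUM)

    # 3. count rows that still return live data; stop once any survive the delete
    live = sum(1 for key_slice in all_rows() if key_slice.columns)
    if live != 0:
        break
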

While this is running, call "bin/nodetool -h localhost -p 8081 flush KeySpace" in the background every minute or so. When the data hits some critical size, the loop will break.
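
The background flush could be driven from the same script; a minimal sketch, assuming the nodetool path and port quoted above and a roughly one-minute interval:

# Minimal sketch: run the quoted nodetool flush command about once a minute
# in a background thread while the loop above is running.
import subprocess
import threading
import time

def periodic_flush():
    while True:
        subprocess.call(["bin/nodetool", "-h", "localhost", "-p", "8081",
                         "flush", "KeySpace"])
        time.sleep(60)

flusher = threading.Thread(target=periodic_flush)
flusher.daemon = True
flusher.start()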

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

