cassandra-commits mailing list archives

From "Jonathan Ellis (JIRA)" <j...@apache.org>
Subject [jira] Issue Comment Edited: (CASSANDRA-1040) read failure during flush
Date Thu, 06 May 2010 03:29:51 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-1040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12864635#action_12864635 ]

Jonathan Ellis edited comment on CASSANDRA-1040 at 5/5/10 11:28 PM:
--------------------------------------------------------------------

Brandon's code to reproduce:

{code}
#!/usr/bin/python
from telephus.protocol import ManagedCassandraClientFactory
from telephus.client import CassandraClient
from twisted.internet import defer

HOST = 'cassandra-6'
PORT = 9160
KEYSPACE = 'Keyspace1'
CF = 'Standard1'
SCF = 'Super1'
colname = 'foo'
scname = 'bar'

@defer.inlineCallbacks
def dostuff(client):
    while True:
        print "inserting"
        for i in xrange(5000):
            yield client.insert(str(i), CF, 'test', column=colname)
        print "removing"
        res = yield client.get_range_slice(CF, count=10000)
        for ks in res:
            if len(ks.columns) > 0:
                yield client.remove(ks.key, CF)
        print "checking"
        res = yield client.get_range_slice(CF, count=10000)
        for ks in res:
            assert len(ks.columns) == 0
        print "ok"

if __name__ == '__main__':
    from twisted.internet import reactor
    from twisted.python import log
    import sys
    log.startLogging(sys.stdout)

    f = ManagedCassandraClientFactory()
    c = CassandraClient(f, KEYSPACE)
    dostuff(c)
    reactor.connectTCP(HOST, PORT, f)
    reactor.run()
{code}
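For context, the resurrection this script is probing for can be sketched with a toy, single-process model (illustrative names only — this is not telephus or Cassandra internals): if a flush swaps out the memtable before the flushed data is registered for reads, a read landing in that gap misses fresh tombstones and deleted rows appear to come back.

```python
TOMBSTONE = object()  # marker for a deleted value; deletes are writes too

class ToyTable:
    """Hypothetical memtable + SSTable store, for illustration only."""
    def __init__(self):
        self.memtable = {}   # key -> value or TOMBSTONE
        self.sstables = []   # flushed memtables, newest first

    def insert(self, key, value):
        self.memtable[key] = value

    def remove(self, key):
        self.memtable[key] = TOMBSTONE

    def read(self, key):
        # newest data wins: memtable first, then SSTables newest-first
        for source in [self.memtable] + self.sstables:
            if key in source:
                v = source[key]
                return None if v is TOMBSTONE else v
        return None

    def flush(self, buggy, read_key=None):
        # swap in a fresh memtable, freezing the old one for flushing
        frozen, self.memtable = self.memtable, {}
        if buggy:
            # BUG: the frozen data is unreadable until the flush lands,
            # so a read in the gap sees neither tombstone nor value
            result = self.read(read_key) if read_key else None
            self.sstables.insert(0, frozen)
        else:
            # FIX: keep the frozen data readable before swapping, so
            # there is no window where its tombstones are invisible
            self.sstables.insert(0, frozen)
            result = self.read(read_key) if read_key else None
        return result

def demo(buggy):
    t = ToyTable()
    t.insert('42', 'test')   # value gets flushed to an SSTable...
    t.flush(buggy=False)
    t.remove('42')           # ...then deleted: tombstone in memtable
    # a read races with the flush that carries the tombstone
    return t.flush(buggy, read_key='42')

assert demo(buggy=True) == 'test'   # deleted row resurrects mid-flush
assert demo(buggy=False) is None    # tombstone stays visible
```

The real race in Cassandra spans threads; the single-threaded `read_key` hook here just stands in for a read arriving mid-flush, and shows why the script's `assert len(ks.columns) == 0` check trips when a flush overlaps the loop.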


> read failure during flush
> -------------------------
>
>                 Key: CASSANDRA-1040
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-1040
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>            Reporter: Jonathan Ellis
>            Assignee: Jonathan Ellis
>            Priority: Critical
>             Fix For: 0.6.2
>
>
> Joost Ouwerkerk writes:
>
> On a single-node cassandra cluster with basic config (-Xmx:1G)
> loop {
>   * insert 5,000 records in a single columnfamily with UUID keys and
> random string values (between 1 and 1000 chars) in 5 different columns
> spanning two different supercolumns
>   * delete all the data by iterating over the rows with
> get_range_slices(ONE) and calling remove(QUORUM) on each row id
> returned (path containing only columnfamily)
>   * count number of non-tombstone rows by iterating over the rows
> with get_range_slices(ONE) and testing data.  Break if not zero.
> }
> while this is running, call "bin/nodetool -h localhost -p 8081 flush KeySpace" in the
> background every minute or so. When the data hits some critical size, the loop will break.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

