Hi,

I'm using the default config from svn trunk for everything: tables, columns, supercolumns, single node, etc.

I'm inserting rows like this:

for u in users:
    # colon-delimited path: ColumnFamily:SuperColumn:Column
    client.insert('Table1', str(u.uid), 'Super1:attrs:key', str(u.key), timestamp, False)
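
(In case it matters, the client is the Thrift-generated Python binding, connected roughly like this; a sketch from memory, so the module name and port are whatever your checkout generates:)

import time
from thrift.transport import TSocket, TTransport
from thrift.protocol import TBinaryProtocol
from cassandra import Cassandra  # module generated by the thrift compiler

# connect to the single local node on the default Thrift port
socket = TSocket.TSocket('localhost', 9160)
transport = TTransport.TBufferedTransport(socket)
client = Cassandra.Client(TBinaryProtocol.TBinaryProtocol(transport))
transport.open()

timestamp = int(time.time() * 1000)  # one timestamp reused across the loop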

After about 100k rows, Cassandra slows down completely, taking forever to insert or get_slice_super. By that point the commitlog is about 70MB in size.
When I stop and restart Cassandra, the logs are replayed and it is fast again, until it reaches the same point and becomes really slow once more.

Is this expected? Is there something in the config that can be changed so this doesn't happen?
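
For example, I notice Standard1 has FlushPeriodInMinutes set, and the comment in the config below suggests that keeps commitlog segments purgeable. Would adding the same attribute to the super column family help? Just a guess:

    <ColumnFamily ColumnType="Super" Name="Super1" FlushPeriodInMinutes="60"/>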

Thanks

Here's the relevant part of my config:

    <Tables>
        <Table Name="Table1">
            <!-- The fraction of keys per sstable whose locations we
                 keep in memory in "mostly LRU" order.  (JUST the key
                 locations, NOT any column values.)

                 The amount of memory used by the default setting of
                 0.01 is comparable to the amount used by the internal
                 per-sstable key index. Consider increasing this if
                 you have fewer, wider rows.  Set to 0 to disable
                 entirely.
            -->
            <KeysCachedFraction>0.01</KeysCachedFraction>
            <!-- if FlushPeriodInMinutes is configured and positive, it will be
                 flushed to disk with that period whether it is dirty or not.
                 This is intended for lightly-used columnfamilies so that they
                 do not prevent commitlog segments from being purged. -->
            <ColumnFamily ColumnSort="Name" Name="Standard1" FlushPeriodInMinutes="60"/>
            <ColumnFamily ColumnSort="Name" Name="Standard2"/>
            <ColumnFamily ColumnSort="Time" Name="StandardByTime1"/>
            <ColumnFamily ColumnType="Super" Name="Super1"/>
        </Table>
    </Tables>