cassandra-user mailing list archives

From Jonathan Colby <jonathan.co...@gmail.com>
Subject Re: flush_largest_memtables_at messages in 7.4
Date Tue, 12 Apr 2011 19:22:22 GMT
Your JVM heap has reached 78%, so Cassandra automatically flushes its memtables. You need to tell us more about your configuration: 32- or 64-bit OS, what is the max heap, and how much RAM is installed?
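If you do decide to raise the threshold, it is set in cassandra.yaml. A fragment for illustration — 0.75 is the shipped default in the 0.7.x series, so confirm against your own file before changing it:

```yaml
# Fraction of heap (measured after a full GC) above which Cassandra
# flushes its largest memtables to free memory. Raising it trades
# fewer emergency flushes for less safety margin; 1.0 disables the check.
flush_largest_memtables_at: 0.75
```

Raising this only papers over memory pressure, though; reducing memtable/cache sizes or growing the heap is usually the better fix.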

If this happens under stress-test conditions it's probably understandable. You should look into graphing your memory usage, or use jconsole to graph the heap during your tests.
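To make the behavior behind that log message concrete, here is a minimal Python sketch (not Cassandra's actual code; names are illustrative) of the post-GC check: if used/max heap exceeds the flush_largest_memtables_at threshold, flush up to the two largest memtables.

```python
# Illustrative sketch of Cassandra's emergency-flush check, not real code.
# 0.75 matches the shipped default for flush_largest_memtables_at.
FLUSH_LARGEST_MEMTABLES_AT = 0.75

def memtables_to_flush(heap_used, heap_max, memtable_sizes):
    """Return up to the two largest memtables if heap usage crosses the threshold.

    memtable_sizes maps column family name -> in-memory size.
    """
    fraction = heap_used / heap_max
    if fraction <= FLUSH_LARGEST_MEMTABLES_AT:
        return []  # below threshold: nothing to do
    # Over threshold: pick the two biggest memtables to free memory quickly.
    return sorted(memtable_sizes, key=memtable_sizes.get, reverse=True)[:2]

# The poster's situation: ~78% of heap used, so the threshold is crossed.
print(memtables_to_flush(780, 1000, {"StressStandard": 128, "system": 4}))
# → ['StressStandard', 'system']
```

The fraction in the log line (0.7802…) is exactly this used/max ratio, which is why the message fired at 78% with the default 0.75 threshold.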

On Apr 12, 2011, at 8:36 PM, mcasandra wrote:

> I am using Cassandra 0.7.4 and getting these messages.
> 
> Heap is 0.7802529021498031 full. You may need to reduce memtable and/or
> cache sizes Cassandra will now flush up to the two largest memtables to free
> up memory. Adjust flush_largest_memtables_at threshold in cassandra.yaml if
> you don't want Cassandra to do this automatically
> 
> How do I verify whether I need to adjust any thresholds? And how do I
> calculate the correct value?
> 
> When I got this message, only reads were occurring.
> 
> create keyspace StressKeyspace
>    with replication_factor = 3
>    and placement_strategy = 'org.apache.cassandra.locator.SimpleStrategy';
> 
> use StressKeyspace;
> drop column family StressStandard;
> create column family StressStandard
>    with comparator = UTF8Type
>    and keys_cached = 1000000
>    and memtable_flush_after = 1440
>    and memtable_throughput = 128;
> 
> nodetool -h dsdb4 tpstats
> Pool Name                    Active   Pending      Completed
> ReadStage                        32       281         456598
> RequestResponseStage              0         0         797237
> MutationStage                     0         0         499205
> ReadRepairStage                   0         0         149077
> GossipStage                       0         0         217227
> AntiEntropyStage                  0         0              0
> MigrationStage                    0         0            201
> MemtablePostFlusher               0         0           1842
> StreamStage                       0         0              0
> FlushWriter                       0         0           1841
> FILEUTILS-DELETE-POOL             0         0           3670
> MiscStage                         0         0              0
> FlushSorter                       0         0              0
> InternalResponseStage             0         0              0
> HintedHandoff                     0         0             15
> 
> cfstats
> 
> Keyspace: StressKeyspace
>        Read Count: 460988
>        Read Latency: 38.07654727454945 ms.
>        Write Count: 499205
>        Write Latency: 0.007409593253272703 ms.
>        Pending Tasks: 0
>                Column Family: StressStandard
>                SSTable count: 9
>                Space used (live): 247408645485
>                Space used (total): 247408645485
>                Memtable Columns Count: 0
>                Memtable Data Size: 0
>                Memtable Switch Count: 1878
>                Read Count: 460989
>                Read Latency: 28.237 ms.
>                Write Count: 499205
>                Write Latency: NaN ms.
>                Pending Tasks: 0
>                Key cache capacity: 1000000
>                Key cache size: 299862
>                Key cache hit rate: 0.6031833150384193
>                Row cache: disabled
>                Compacted row minimum size: 219343
>                Compacted row maximum size: 5839588
>                Compacted row mean size: 497474
> 
> 

