cassandra-commits mailing list archives

From "Jonathan Ellis (JIRA)" <j...@apache.org>
Subject [jira] Commented: (CASSANDRA-1426) Bring back RowWarningThresholdInMB and set it low
Date Tue, 24 Aug 2010 22:15:19 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-1426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12902103#action_12902103 ]

Jonathan Ellis commented on CASSANDRA-1426:
-------------------------------------------

$ grep memory.compaction conf/cassandra.yaml 
in_memory_compaction_limit_in_mb: 64


> Bring back RowWarningThresholdInMB and set it low
> -------------------------------------------------
>
>                 Key: CASSANDRA-1426
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-1426
>             Project: Cassandra
>          Issue Type: New Feature
>    Affects Versions: 0.7 beta 1
>            Reporter: Edward Capriolo
>
> The problem with big rows in 0.6 and 0.7 is that they tend to cause OOM with the row cache
and other memory problems. CFStats shows us the MaximumSizedRow, but it does not show which
row this is. Applications that have to scan all the data on a node to turn up a big row are
resource-intensive, and while they are running they significantly lower the cache hit rate.
> Even though Cassandra 0.7 can accommodate larger rows than 0.6.x, most use cases would
never have rows that go over 2 MB.
> Please consider bringing this feature back and setting it low, e.g. <RowWarningThresholdInMB>10</RowWarningThresholdInMB>.
With this, admins can monitor the logs and flag large rows before they get out of hand and
cause mysterious crashes.
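The monitoring workflow described above could be sketched as follows. Note the log line format here is an assumed example for illustration only; the actual warning text emitted by Cassandra (if this feature is restored) may differ.

```shell
#!/bin/sh
# Hypothetical sketch: surface large-row warnings from a Cassandra system log.
# The sample log content below is fabricated to illustrate the grep workflow.
cat > /tmp/sample-system.log <<'EOF'
 INFO [CompactionExecutor:1] 2010-08-24 22:15:19 Compacted SSTable for Keyspace1/Standard1
 WARN [CompactionExecutor:1] 2010-08-24 22:15:20 Large row detected: key=user12345 size=48MB
EOF

# An admin cron job could flag any rows that tripped the warning threshold.
grep 'Large row' /tmp/sample-system.log
```

Running a check like this periodically would let operators find the offending row keys directly, instead of scanning all data on the node.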

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

