cassandra-commits mailing list archives

From "Stu Hood (JIRA)" <>
Subject [jira] Commented: (CASSANDRA-1426) Bring back RowWarningThresholdInMB and set it low
Date Tue, 24 Aug 2010 21:43:16 GMT


Stu Hood commented on CASSANDRA-1426:

OOMs due to rows in the row cache growing too large are probably the core problem: we should
treat the disease rather than the symptom. If we could find a way to store partial
rows in the row cache, we could enable it by default and stop worrying about OOMs.
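One way to read the "partial rows" suggestion is a cache that simply refuses to hold rows above a size cap, so a single huge row can never blow the heap. A minimal sketch of that idea, with illustrative names only (this is not Cassandra's actual row cache API):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: a row cache with a per-row size cap. Oversized rows
// are not cached at all and must be served from disk; everything else is
// cached normally. Class and method names are assumptions for illustration.
public class BoundedRowCache {
    private final Map<String, byte[]> cache = new HashMap<>();
    private final long maxRowBytes;

    public BoundedRowCache(long maxRowBytes) {
        this.maxRowBytes = maxRowBytes;
    }

    /** Cache the serialized row only if it fits under the cap; returns true if cached. */
    public boolean put(String key, byte[] serializedRow) {
        if (serializedRow.length > maxRowBytes) {
            return false; // too large: skip the cache rather than risk an OOM
        }
        cache.put(key, serializedRow);
        return true;
    }

    /** Returns the cached row, or null if it was never cached. */
    public byte[] get(String key) {
        return cache.get(key);
    }
}
```

Storing a truncated prefix of the row instead of skipping it entirely would be the "partial row" variant; the skip-if-too-big version above is just the simplest safe behavior.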

> Bring back RowWarningThresholdInMB and set it low
> -------------------------------------------------
>                 Key: CASSANDRA-1426
>                 URL:
>             Project: Cassandra
>          Issue Type: New Feature
>    Affects Versions: 0.7 beta 1
>            Reporter: Edward Capriolo
> The problem with big rows in 0.6 and 0.7 is that they tend to cause OOMs with the row cache and
other memory problems. CFStats shows us the MaximumSizedRow but it does not show which row
this is. Applications that have to scan all the data on a node to turn up a big row are intensive,
and while they are running they lower the cache hit rate significantly.
> Even though Cassandra 0.7 can accommodate larger rows than 0.6.x, most use cases would
never have rows that go over 2 MB.
> Please consider bringing this feature back and setting it low, e.g. <RowWarningThresholdInMB>10</RowWarningThresholdInMB>.
With this, admins can monitor logs and spot large rows before they get out of hand and
cause mysterious crashes.
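The requested behavior amounts to a simple check against a configured threshold, emitting a warning that includes the row key so operators can find the offender in the logs. A hedged sketch of that check, with hypothetical names (not Cassandra's actual internals):

```java
// Hypothetical sketch of the proposed warning: compare each row's serialized
// size against a configured megabyte threshold and produce a log-ready
// warning line naming the row key. Names and format are illustrative only.
public class RowSizeWarner {
    private final long warnThresholdBytes;

    public RowSizeWarner(long warnThresholdMB) {
        this.warnThresholdBytes = warnThresholdMB * 1024 * 1024;
    }

    /** Returns a warning message for oversized rows, or null if under the threshold. */
    public String check(String rowKey, long rowSizeBytes) {
        if (rowSizeBytes > warnThresholdBytes) {
            return String.format("Large row detected: key=%s size=%d bytes (threshold=%d bytes)",
                                 rowKey, rowSizeBytes, warnThresholdBytes);
        }
        return null;
    }
}
```

Running the check during compaction, when every row is read anyway, would avoid the dedicated full-node scan the description complains about.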

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
