cassandra-commits mailing list archives

From "Edward Capriolo (JIRA)" <>
Subject [jira] Commented: (CASSANDRA-1426) Bring back RowWarningThresholdInMB and set it low
Date Tue, 24 Aug 2010 22:09:24 GMT


Edward Capriolo commented on CASSANDRA-1426:

I am for a smarter row cache as well, as long as there are no other issues that large rows can
cause. Just putting a scenario out there: getSlicing on a large row. OOMs are not the only
problem. Any activity that does not OOM but causes a random GC/JVM pause, those are the
things I worry about, one bad row spoiling the show :)
Sorry for the RTFM question, but what size is the in-memory compaction threshold, and can it
be changed on the user end?
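For reference, the 0.7-era in-memory compaction limit lives in cassandra.yaml as a node-level setting. The option name and default below are a sketch, so verify them against your own release's config:

```yaml
# Sketch of the relevant cassandra.yaml setting (0.7-era option name;
# check your release's bundled config for the exact name and default).
# Rows larger than this limit are compacted incrementally on disk
# instead of being deserialized entirely in memory.
in_memory_compaction_limit_in_mb: 64
```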

> Bring back RowWarningThresholdInMB and set it low
> -------------------------------------------------
>                 Key: CASSANDRA-1426
>                 URL:
>             Project: Cassandra
>          Issue Type: New Feature
>    Affects Versions: 0.7 beta 1
>            Reporter: Edward Capriolo
> The problem with big rows in 0.6 and 0.7 is that they tend to cause OOMs with the row cache and
> other memory problems. CFStats shows us the MaximumSizedRow, but it does not show which row
> this is. Applications that have to scan all the data on a node to turn up a big row are intensive,
> and while they are running they lower the cache hit rate significantly.
> Even though Cassandra 0.7 can accommodate larger rows than 0.6.x, most use cases would
> never have rows that go over 2 MB.
> Please consider bringing this feature back and setting it low, e.g. <RowWarningThresholdInMB>10</RowWarningThresholdInMB>.
> With this, admins can monitor the logs and point out large rows before they get out of hand and
> cause mysterious crashes.
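The log-monitoring workflow described above could be sketched as a small script that scans for large-row warnings. The log line format here is an assumption, modeled loosely on the "Compacting large row ... incrementally" messages Cassandra emits for oversized rows; adjust the regex to match your actual logs:

```python
import re

# Hypothetical sample log; the exact message format is an assumption
# and will differ between Cassandra releases.
SAMPLE_LOG = """\
 INFO 12:00:01 Compacting large row 6b657931 (157286400 bytes) incrementally
 INFO 12:00:05 Completed flushing Memtable-Standard1
 INFO 12:01:30 Compacting large row 6b657932 (209715200 bytes) incrementally
"""

# Capture the row key and its size in bytes from each warning line.
LARGE_ROW = re.compile(r"Compacting large row (\S+) \((\d+) bytes\)")

def find_large_rows(log_text, threshold_bytes):
    """Return (row_key, size_in_bytes) pairs for rows at or above the threshold."""
    hits = []
    for line in log_text.splitlines():
        m = LARGE_ROW.search(line)
        if m and int(m.group(2)) >= threshold_bytes:
            hits.append((m.group(1), int(m.group(2))))
    return hits

# Flag anything at or above 200 MB.
print(find_large_rows(SAMPLE_LOG, 200 * 1024 * 1024))
```

Run periodically against the system log, this points an admin at the offending row keys directly, rather than requiring a full-node scan that would depress the cache hit rate.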

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
