cassandra-commits mailing list archives

From "Jonathan Ellis (JIRA)" <j...@apache.org>
Subject [jira] [Resolved] (CASSANDRA-9413) Add a default limit size (in bytes) for requests
Date Mon, 25 May 2015 14:19:17 GMT

     [ https://issues.apache.org/jira/browse/CASSANDRA-9413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Ellis resolved CASSANDRA-9413.
---------------------------------------
    Resolution: Won't Fix

Such a safeguard could still be circumvented by a sufficiently large number of concurrent
clients.  You really need to go all the way and bound the total memory in use across all
requests, which is not a quick hack.
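
For illustration only, a minimal sketch of what such a global bound might look like: a
shared byte budget that every in-flight request must reserve from before materializing
its response, failing fast instead of driving the heap into long GC pauses. The class
name, sizing, and wiring are assumptions, not Cassandra's actual implementation.

    // Hypothetical sketch, not Cassandra code: one global byte budget shared
    // by all in-flight requests.
    import java.util.concurrent.Semaphore;

    public class RequestMemoryLimiter
    {
        // Total bytes all concurrent requests may hold at once (assumed figure).
        private final Semaphore budget;

        public RequestMemoryLimiter(int totalBytes)
        {
            this.budget = new Semaphore(totalBytes);
        }

        // Reserve estimatedBytes before building a response; throw rather than OOM.
        public void reserve(int estimatedBytes)
        {
            if (!budget.tryAcquire(estimatedBytes))
                throw new IllegalStateException("Request memory budget exhausted");
        }

        // Return the reservation once the response has been flushed to the client.
        public void release(int estimatedBytes)
        {
            budget.release(estimatedBytes);
        }
    }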

> Add a default limit size (in bytes) for requests
> ------------------------------------------------
>
>                 Key: CASSANDRA-9413
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-9413
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Core
>         Environment: Cassandra 2.0.10, accessed via Thrift
>            Reporter: Antoine Blanchet
>
> We experienced a crash on our production cluster following a massive wide-row read over
> Thrift. A client tried to read a wide row (~4GB of raw data) without specifying any
> slice condition, which resulted in the crash of multiple nodes (as many as
> the replication factor) after long garbage collections.
> We know wide rows should not be that big, but that is not the point here.
> My question is the following: is it possible to prevent Cassandra from
> OOM'ing when a client issues this kind of request? I'd rather have an error
> returned to the client than a multi-server crash.
> The issue has already been discussed on the user mailing list; the thread is here: https://www.mail-archive.com/user@cassandra.apache.org/msg42340.html
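
In the meantime, a Thrift client can protect itself by never reading a wide row
unbounded: always page with a SliceRange whose count caps the columns returned per
call. A minimal sketch against the Cassandra 2.0 Thrift API (host, keyspace, column
family, and row key below are placeholders):

    import java.nio.ByteBuffer;
    import java.nio.charset.StandardCharsets;
    import java.util.List;
    import org.apache.cassandra.thrift.*;
    import org.apache.thrift.protocol.TBinaryProtocol;
    import org.apache.thrift.transport.TFramedTransport;
    import org.apache.thrift.transport.TSocket;
    import org.apache.thrift.transport.TTransport;

    public class BoundedSliceRead
    {
        public static void main(String[] args) throws Exception
        {
            TTransport transport = new TFramedTransport(new TSocket("localhost", 9160));
            transport.open();
            Cassandra.Client client = new Cassandra.Client(new TBinaryProtocol(transport));
            client.set_keyspace("my_keyspace");

            // Empty start/finish spans the whole row, but count caps each page at
            // 1000 columns, so no single call can pull ~4GB into the heap.
            SliceRange range = new SliceRange(ByteBuffer.allocate(0), ByteBuffer.allocate(0), false, 1000);
            SlicePredicate predicate = new SlicePredicate().setSlice_range(range);
            List<ColumnOrSuperColumn> page = client.get_slice(
                ByteBuffer.wrap("row_key".getBytes(StandardCharsets.UTF_8)),
                new ColumnParent("my_cf"), predicate, ConsistencyLevel.ONE);

            // To page further, re-issue with 'start' set to the last column name returned.
            transport.close();
        }
    }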



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
