cassandra-user mailing list archives

From Jack Krupansky <jack.krupan...@gmail.com>
Subject Re: Out of memory on wide row read
Date Tue, 19 May 2015 13:34:36 GMT
Shame on me for not noticing that you uttered the magic anti-pattern word -
Thrift. Yeah, the standard response to any inquiry concerning Thrift is
always that you should be migrating to CQL3.
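The automatic paging that makes CQL3 the safer path works roughly like this on the client side: the driver requests bounded pages and streams rows, so a wide row is never materialized at once. A toy sketch of that idea, where `fetch_page` is a hypothetical callback standing in for one round trip (none of these names come from an actual Cassandra driver):

```python
# Toy model of CQL3-style automatic paging. fetch_page(start, count) is a
# hypothetical stand-in for one bounded round trip to the server; it is
# not a real driver API.
def paged_fetch(fetch_page, fetch_size=5000):
    start = 0
    while True:
        page = fetch_page(start, fetch_size)   # one bounded request
        if not page:
            return
        yield from page                        # hand rows to the caller lazily
        if len(page) < fetch_size:             # short page means no more rows
            return
        start += len(page)
```

Because the generator never holds more than one page, memory use is bounded by `fetch_size` rather than by the width of the row, which is exactly what the Thrift slice API lacks.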

-- Jack Krupansky

On Tue, May 19, 2015 at 3:13 AM, Antoine Blanchet <
a.blanchet@abc-arbitrage.com> wrote:

> The issue has been closed by Jonathan Ellis. The limit is useless in CQL
> because of the automatic paging feature
> <http://www.datastax.com/dev/blog/client-side-improvements-in-cassandra-2-0>,
> which is cool. But this feature will not be added to the Thrift API. Subject
> closed :).
>
> On Mon, May 18, 2015 at 6:05 PM, Antoine Blanchet <
> a.blanchet@abc-arbitrage.com> wrote:
>
>> Done, https://issues.apache.org/jira/browse/CASSANDRA-9413 . Feel free
>> to improve the description; I've only copy/pasted the first message from
>> Kévin.
>>
>> Thanks.
>>
>> On Fri, May 15, 2015 at 9:56 PM, Alprema <alprema@alprema.com> wrote:
>>
>>> I will file a Jira for that, thanks
>>> On May 12, 2015 10:15 PM, "Jack Krupansky" <jack.krupansky@gmail.com>
>>> wrote:
>>>
>>>> Sounds like it's worth a Jira - Cassandra should protect itself from
>>>> innocent mistakes or excessive requests from clients. Maybe there should
>>>> be a timeout or a result size limit (bytes in addition to count).
>>>> Something. Anything. But OOM seems a tad unfriendly for an innocent
>>>> mistake. In this particular case, maybe Cassandra could detect the total
>>>> row size/slice being read and error out on a configurable limit.
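The byte-budget limit Jack suggests can at least be approximated on the client while draining a result set. A rough sketch, with the understanding that `max_bytes`, the exception text, and the helper name are invented for illustration, and `sys.getsizeof` is only a crude per-row estimate rather than the wire size Cassandra would see:

```python
import sys

# Client-side approximation of "error out on a configurable limit":
# accumulate rows until an estimated byte budget is exceeded, then fail
# fast instead of exhausting memory. All names here are illustrative.
def read_with_byte_budget(rows, max_bytes=16 * 1024 * 1024):
    out, used = [], 0
    for row in rows:
        used += sys.getsizeof(row)     # rough in-memory size estimate
        if used > max_bytes:
            raise RuntimeError(
                "result exceeded %d-byte budget; narrow the slice" % max_bytes)
        out.append(row)
    return out
```

An error raised here is recoverable by the client; an OOM on the server takes the node (or several) down with it, which is the asymmetry the thread is complaining about.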
>>>>
>>>> -- Jack Krupansky
>>>>
>>>> On Tue, May 12, 2015 at 1:57 PM, Robert Coli <rcoli@eventbrite.com>
>>>> wrote:
>>>>
>>>>> On Tue, May 12, 2015 at 8:43 AM, Kévin LOVATO <klovato@alprema.com>
>>>>> wrote:
>>>>>
>>>>>> My question is the following: Is it possible to prevent Cassandra
>>>>>> from OOM'ing when a client makes this kind of request? I'd rather
>>>>>> have an error thrown to the client than a multi-server crash.
>>>>>>
>>>>>
>>>>> You can provide a default LIMIT clause, but this is based on number of
>>>>> results and not size.
>>>>>
>>>>> Other than that, there are not really great options.
>>>>>
>>>>> =Rob
>>>>>
>>>>>
>>>>
>>>>
>>
>>
>> --
>> Antoine Blanchet
>> ABC Arbitrage Asset Management
>> http://www.abc-arbitrage.com/
>>
>
>
>
