cassandra-commits mailing list archives

From "Benjamin Lerer (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (CASSANDRA-11528) Server Crash when select returns more than a few hundred rows
Date Sat, 03 Sep 2016 12:29:20 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-11528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15461009#comment-15461009 ]

Benjamin Lerer commented on CASSANDRA-11528:
--------------------------------------------

A {{count(*)}} does not use more memory than a {{SELECT *}}, as the query is paged
internally. In fact, in your version it should use even less memory.

If you could provide us with a test case that reproduces the problem, or a heap dump, it would
help us investigate this problem.
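
For what it's worth, the JVM can be made to capture a heap dump automatically if the crash is an {{OutOfMemoryError}}. A minimal sketch, assuming a stock HotSpot JVM; the dump path is a placeholder, and on a Windows DataStax DDC install these options would typically go into the service's JVM arguments or {{conf\cassandra-env.ps1}} (an assumption about your setup):

{noformat}
# Standard HotSpot flags; point the path at a disk with enough free space
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=C:\temp\cassandra-heap.hprof
{noformat}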

> Server Crash when select returns more than a few hundred rows
> -------------------------------------------------------------
>
>                 Key: CASSANDRA-11528
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-11528
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>         Environment: windows 7, 8 GB machine
>            Reporter: Mattias W
>             Fix For: 3.x
>
>         Attachments: datastax_ddc_server-stdout.2016-04-07.log
>
>
> While implementing a dump procedure that did a "select * from" on one table at a time,
> I instantly killed the server. A simple
> {noformat}select count(*) from {noformat} 
> also kills it. For a while, I thought the size of the blobs was the cause.
> I also tried using only a unique id as the partition key; I was afraid a single partition
> had become too big, but that didn't change anything.
> It happens every time, both from Java/Clojure and from DevCenter.
> I looked at the logs in C:\Program Files\DataStax-DDC\logs, but the crash is so quick
> that nothing is recorded there.
> There is a Java out-of-memory error in the logs, but it isn't from the time of the crash.
> It only happens for one table. It has only 15000 entries, but blobs and byte[] values
> between 100 KB and 4 MB are stored there. The total size of that table is about 6.5 GB on disk.
> I worked around it by doing many small selects instead, each fetching only 100 rows.
> Is there a setting I can set to make the system log more eagerly, so that I can at least
> get a stack trace or something similar that might help you?
> It is the prun_srv process that dies. Restarting the NT service makes Cassandra run again.
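
Regarding the 100-row workaround mentioned above: it does not need to be many separate selects. The DataStax Java driver pages a single {{SELECT}} transparently once a fetch size is set. A minimal sketch, assuming driver 3.x and placeholder keyspace, table, and column names:

{noformat}
// Hedged sketch: stream a large table while keeping only ~100 rows in client memory.
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;
import com.datastax.driver.core.Statement;

public class PagedDump {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("my_keyspace")) {               // placeholder keyspace
            Statement stmt = new SimpleStatement("SELECT id, payload FROM my_table") // placeholder table/columns
                    .setFetchSize(100);                                        // driver fetches 100 rows per page
            ResultSet rs = session.execute(stmt);
            for (Row row : rs) {
                // Iterating transparently fetches the next page from the server as needed.
                System.out.println(row.getUUID("id"));
            }
        }
    }
}
{noformat}

With the fetch size set to 100, the driver requests one page at a time from the server, so client-side memory stays flat even when the rows contain large blobs.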



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
