cassandra-commits mailing list archives

From "nicolas ginder (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (CASSANDRA-12707) JVM out of memory when querying an extra-large partition with lots of tombstones
Date Mon, 07 Nov 2016 11:30:58 GMT

     [ https://issues.apache.org/jira/browse/CASSANDRA-12707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

nicolas ginder updated CASSANDRA-12707:
---------------------------------------
    Reproduced In: 2.1.x, 2.2.x  (was: 2.1.x)

> JVM out of memory when querying an extra-large partition with lots of tombstones
> --------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-12707
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-12707
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>            Reporter: nicolas ginder
>             Fix For: 2.1.x, 2.2.x
>
>
> We have an extra-large partition of 40 million cells where most of the cells were deleted.
> When querying this partition with a slice query, Cassandra runs out of memory as tombstones
> fill up the JVM heap. After debugging one of the large SSTables, we found that this part of
> the code loads all the tombstones.
> In org.apache.cassandra.db.filter.QueryFilter:
> ...
>     public static Iterator<Cell> gatherTombstones(final ColumnFamily returnCF, final Iterator<? extends OnDiskAtom> iter)
>     {
>         ...
>         while (iter.hasNext())
>         {
>             OnDiskAtom atom = iter.next();
>             if (atom instanceof Cell)
>             {
>                 next = (Cell)atom;
>                 break;
>             }
>             else
>             {
>                 returnCF.addAtom(atom);
>             }
>         }
>         ...
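
To illustrate the pattern being reported, here is a minimal, self-contained Java sketch. The types (Atom, Cell, Tombstone, ResultContainer) are stand-ins invented for the example, not Cassandra's real classes; it only mirrors the quoted loop, in which every non-Cell atom is buffered via returnCF.addAtom(atom) before the next live cell is returned, so heap usage grows with the number of tombstones crossed rather than with the size of the query result.

    import java.util.ArrayList;
    import java.util.Iterator;
    import java.util.List;

    // Sketch of the accumulation pattern quoted above (hypothetical types,
    // not Cassandra's real classes): while scanning for the next live cell,
    // every tombstone encountered is kept in memory.
    public class TombstoneAccumulationSketch
    {
        interface Atom {}                          // stands in for OnDiskAtom
        static class Cell implements Atom {}       // a live cell
        static class Tombstone implements Atom {}  // a deletion marker

        // stands in for the ColumnFamily that collects gathered tombstones
        static class ResultContainer
        {
            final List<Atom> tombstones = new ArrayList<>();
            void addAtom(Atom a) { tombstones.add(a); }
        }

        // Mirrors the quoted loop: skip over tombstones, buffering each one,
        // until a live cell is found.
        static Cell nextLiveCell(Iterator<? extends Atom> iter, ResultContainer returnCF)
        {
            while (iter.hasNext())
            {
                Atom atom = iter.next();
                if (atom instanceof Cell)
                    return (Cell) atom;      // first live cell ends the scan
                else
                    returnCF.addAtom(atom);  // tombstone stays on the heap
            }
            return null;
        }

        public static void main(String[] args)
        {
            // Simulate a slice that must cross many tombstones before the
            // first live cell: the container grows with that count.
            List<Atom> partition = new ArrayList<>();
            int deletedCells = 1_000_000;
            for (int i = 0; i < deletedCells; i++)
                partition.add(new Tombstone());
            partition.add(new Cell());

            ResultContainer returnCF = new ResultContainer();
            Cell first = nextLiveCell(partition.iterator(), returnCF);
            System.out.println("found live cell: " + (first != null));
            System.out.println("tombstones buffered: " + returnCF.tombstones.size());
        }
    }

Scaling deletedCells up shows the buffered tombstone count, and therefore heap usage, growing without bound, which is the behaviour described for the 40-million-cell partition.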



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
