cassandra-commits mailing list archives

From "Jason Brown (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (CASSANDRA-6609) Reduce Bloom Filter Garbage Allocation
Date Wed, 22 Jan 2014 19:25:26 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-6609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13879062#comment-13879062 ]

Jason Brown commented on CASSANDRA-6609:
----------------------------------------

bq. Escape analysis

Ahh, I'll buy that, cautiously :). 

As an alternative (and I didn't look too hard at whether the code can bend this way), you might
be able to get away with just passing around two longs instead of a long[], since we're dealing
with, more or less, a 128-bit value. Then you avoid the array object altogether, and those
values will be stack-allocated.
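
Something along these lines, roughly (names and signatures invented just to illustrate the
shape, not the existing BloomFilter/MurmurHash code):

{code}
import java.nio.ByteBuffer;

// Illustration only: hypothetical names, not the actual Cassandra classes.
public class TwoLongSketch
{
    // Stand-in for the 128-bit hash: each 64-bit half is returned as a primitive
    // long instead of being written into a freshly allocated long[2].
    static long hash1(ByteBuffer key) { return key.hashCode() * 0x9E3779B97F4A7C15L; }
    static long hash2(ByteBuffer key) { return key.hashCode() * 0xC2B2AE3D27D4EB4FL; }

    static boolean mightContain(ByteBuffer key, int hashCount, boolean[] bits)
    {
        long h1 = hash1(key);
        long h2 = hash2(key);
        // Double hashing: derive the i-th bucket index on the fly from the two
        // halves, so no long[] is needed to hold the bucket indexes either.
        for (int i = 0; i < hashCount; i++)
        {
            int bucket = (int) (((h1 + i * h2) & Long.MAX_VALUE) % bits.length);
            if (!bits[bucket])
                return false;
        }
        return true;
    }
}
{code}

The two locals never escape, so even without help from escape analysis there's nothing for the
collector to chase per lookup.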

> Reduce Bloom Filter Garbage Allocation
> --------------------------------------
>
>                 Key: CASSANDRA-6609
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6609
>             Project: Cassandra
>          Issue Type: Improvement
>            Reporter: Benedict
>         Attachments: tmp.diff
>
>
> Just spotted that we allocate potentially large amounts of garbage on bloom filter lookups,
> since we allocate a new long[] for each hash() and to store the bucket indexes we visit, in
> a manner that guarantees they are allocated on heap. With a lot of sstables and many requests,
> this could easily be hundreds of megabytes of young gen churn per second.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
