cassandra-commits mailing list archives

From "Benedict (JIRA)" <>
Subject [jira] [Commented] (CASSANDRA-6694) Slightly More Off-Heap Memtables
Date Wed, 02 Apr 2014 20:57:24 GMT


Benedict commented on CASSANDRA-6694:

Sure. There are now four main concepts that interact in various ways:

ByteBufferAllocator; Pool and PoolAllocator; ByteBufferPool.Allocator (and its implementors);
and NativeAllocator.

# ByteBufferAllocator, as the name suggests, is a straightforward abstraction for the allocation/cloning
of NIO ByteBuffers. It does not directly support any concept of pooling, nor does it understand
the use of OpOrder for write guarding.
# Pool and PoolAllocator are now independent of any concept of *what* they allocate - they
simply manage the memory resources themselves, and leave the actual allocation to the implementing
classes.
# ByteBufferPool.Allocator is the combination of PoolAllocator and BBA, although it is not itself
a BBA - it constructs a "context" BBA when given a writeOp that is guarding the allocation.
This helps to keep the concept of write-guarded pooled allocations cleanly separated from
simple BBA allocations, whilst using the same code paths. Note that BBP.A is _abstract_ and
is implemented by SlabAllocator and HeapPool.Allocator. We might consider renaming SlabAllocator
to HeapSlabAllocator to keep naming consistent and help clarity.
# NativeAllocator is, by contrast, the extension of PoolAllocator that supports native allocations
- that is, any object that extends NativeAllocation.
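The relationships among the four concepts might be sketched roughly as below. This is an illustrative stand-in only: every class name, method name, and signature here is an assumption for exposition, not the actual Cassandra 2.1 source.

```java
import java.nio.ByteBuffer;

// 1. Plain allocation/cloning of NIO ByteBuffers: no pooling, no OpOrder.
interface ByteBufferAllocator {
    ByteBuffer allocate(int size);

    // Clone by allocating a fresh buffer and copying the source's contents.
    default ByteBuffer clone(ByteBuffer src) {
        ByteBuffer copy = allocate(src.remaining());
        copy.put(src.duplicate());
        copy.flip();
        return copy;
    }
}

// 2. PoolAllocator-style class: tracks memory resources only; it has no idea
//    *what* is allocated -- that is left to implementing classes.
abstract class PoolAllocator {
    long owns; // bytes this allocator has claimed from the shared Pool

    void acquired(int bytes) { owns += bytes; }
}

// 3. ByteBufferPool.Allocator-style class: combines PoolAllocator with BBA
//    but is not itself a BBA -- it hands out a "context" BBA tied to the
//    writeOp guarding the allocation, so write-guarded pooled allocations
//    and plain BBA allocations share the same code paths.
abstract class ByteBufferPoolAllocator extends PoolAllocator {
    abstract ByteBuffer allocateUnderOp(int size, Object writeOp);

    ByteBufferAllocator contextFor(Object writeOp) {
        return size -> {
            acquired(size);
            return allocateUnderOp(size, writeOp);
        };
    }
}

// 4. NativeAllocator, by contrast, would extend PoolAllocator to hand out
//    off-heap NativeAllocation objects rather than ByteBuffers (omitted).
```

In this sketch, SlabAllocator and HeapPool.Allocator would be the concrete subclasses of the abstract ByteBufferPool.Allocator, differing only in where allocateUnderOp gets its memory.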

> Slightly More Off-Heap Memtables
> --------------------------------
>                 Key: CASSANDRA-6694
>                 URL:
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Core
>            Reporter: Benedict
>            Assignee: Benedict
>              Labels: performance
>             Fix For: 2.1 beta2
> The Off Heap memtables introduced in CASSANDRA-6689 don't go far enough, as the on-heap
overhead is still very large. It should not be tremendously difficult to extend these changes
so that we allocate entire Cells off-heap, instead of multiple BBs per Cell (with all their
associated overhead).
> The goal (if possible) is to reach an overhead of 16 bytes per Cell (plus 4-6 bytes per
cell on average for the btree overhead, for a total overhead of around 20-22 bytes). This
translates to: an 8-byte object overhead; a 4-byte address (we will do alignment tricks like
the VM to allow us to address a reasonably large memory space, although this trick is unlikely
to last us forever, at which point we will have to bite the bullet and accept a 24-byte
per-cell overhead); and a 4-byte object reference for maintaining our internal list of
allocations. That reference is unfortunately necessary since we cannot otherwise safely (and
cheaply) walk the object graph we allocate, which is needed for (allocation-) compaction and
pointer rewriting.
> The ugliest thing here is going to be implementing the various CellName instances so
that they may be backed by native memory OR heap memory.
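The arithmetic in the quoted description, and the alignment trick it alludes to, can be sanity-checked with a small sketch. The constants come from the ticket text; the shift value and the encode/decode helpers are my own illustration (analogous to the JVM's compressed oops), not something from the patch.

```java
// Back-of-envelope per-Cell overhead numbers from the ticket description,
// plus an illustrative compressed-address scheme.
final class CellOverheadSketch {
    static final int OBJECT_HEADER  = 8; // per-Cell JVM object header
    static final int NATIVE_ADDRESS = 4; // compressed off-heap address
    static final int ALLOCATION_REF = 4; // entry in the allocator's internal allocation list
    static final int PER_CELL = OBJECT_HEADER + NATIVE_ADDRESS + ALLOCATION_REF; // 16 bytes

    // With 8-byte-aligned allocations, a 32-bit compressed address spans
    // 2^32 * 8 bytes = 32GB -- "a reasonably large memory space", though
    // not one that lasts forever.
    static final int ALIGNMENT_SHIFT = 3;

    static long decode(int compressed) {
        return (compressed & 0xFFFFFFFFL) << ALIGNMENT_SHIFT;
    }

    static int encode(long offset) {
        return (int) (offset >>> ALIGNMENT_SHIFT);
    }
}
```

Adding the 4-6 bytes of amortised btree overhead to PER_CELL gives the 20-22 byte total quoted above; the 24-byte fallback presumably corresponds to widening the compressed 4-byte address to a full 8-byte one.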

This message was sent by Atlassian JIRA
