directmemory-dev mailing list archives

From Christoph Engelbert <>
Subject Re: New buffer backend
Date Tue, 23 Jul 2013 04:02:34 GMT
Am 23.07.2013 00:54, schrieb Jan Kotek:
>> very high contention so I had a deep look at leaving
>> out most positions where contention can arise.
> My solution was global structural lock with segment locks (I think you call it 
> partitions). 

Depending on the partitioning algorithm, a partition is assigned to a
core, thread, etc., and the partition pool is only locked to assign a
new partition when all fragments are in use.
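To illustrate the idea (a minimal sketch, not the actual DirectMemory code; the class and method names here are assumptions): only the hand-out of a fresh partition is synchronized, while allocation inside an already-assigned partition needs no lock because a single owner touches it.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch: the pool is locked only when a consumer needs
// a fresh partition; work inside an assigned partition is lock-free
// because only the owning core/thread ever touches it.
class PartitionPool {
    private final Deque<int[]> freePartitions = new ArrayDeque<>();

    PartitionPool(int partitions, int slotsPerPartition) {
        for (int i = 0; i < partitions; i++) {
            freePartitions.add(new int[slotsPerPartition]);
        }
    }

    // The only synchronized section: handing out a new partition.
    synchronized int[] assignPartition() {
        int[] p = freePartitions.poll();
        if (p == null) {
            throw new IllegalStateException("partition pool exhausted");
        }
        return p;
    }

    synchronized int available() {
        return freePartitions.size();
    }
}
```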

> Record layout change (allocate/deallocate) is done under a global lock; this 
> code is optimized and typically requires 16 bytes of IO.
Here I normally never need to lock, since there can only be one thread
at a time for core-local or thread-local allocation.
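A minimal sketch of why thread-local allocation needs no lock (hypothetical names and sizes, not DirectMemory's actual code): each thread owns its own partition, so bumping the write position is a plain, unsynchronized update.

```java
// Hypothetical sketch of thread-local allocation (TLA): each thread
// owns its own partition, so advancing the bump pointer needs no lock.
class ThreadLocalAllocator {
    static final int PARTITION_SIZE = 1024; // assumed size, for illustration

    // One bump pointer per thread; index 0 holds the next free offset.
    private final ThreadLocal<long[]> position =
            ThreadLocal.withInitial(() -> new long[] {0L});

    // Reserve 'size' bytes in the calling thread's partition and return
    // the start offset, or -1 if the partition is full.
    long allocate(int size) {
        long[] pos = position.get();
        if (pos[0] + size > PARTITION_SIZE) {
            return -1;
        }
        long start = pos[0];
        pos[0] += size;
        return start;
    }
}
```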

> Then reading/writing the actual data is done in parallel under segmented locks. 
> It will be interesting to compare those two approaches. 
Sure :-)
> J.
> On Monday 22 July 2013 22:43:17 Christoph Engelbert wrote:
>> Am 22.07.2013 22:28, schrieb Jan Kotek:
>>> Hi,
>>> I have something similar in MapDB. It is a light, growable
>>> abstraction over ByteBuffers and FileChannel. The major difference is that Volume
>>> is not thread safe; locking is handled at a higher level. It will be
>>> interesting to compare those.
>> The backend was created as a prototype for the company I'm working
>> for, and I have very high contention, so I took a deep look at eliminating
>> most places where contention can arise.
>>> Any chance this backed would work for memory mapped files?
>> In theory it should work (with small changes) for both
>> implementations - ByteBuffer-backed and Unsafe-backed (the latter with
>> mapped ByteBuffers but read/write access through Unsafe).
>>> Jan Kotek
>>> On Friday 12 July 2013 20:55:11 Christoph Engelbert wrote:
>>>> Hey guys
>>>> I finally managed to merge everything together :-)
>>>> As stated a few weeks ago, I made a partitioned buffer system for
>>>> good performance and low contention.
>>>> It has different selection strategies like TLA (Thread Local
>>>> Allocation), a simple round-robin, or (on Linux and Windows) CLA
>>>> (Processor Core Local Allocation), where the last one is done using OS
>>>> calls and JNA.
>>>> It features ByteBuffers for heap and offheap as well as Unsafe. It
>>>> has growing buffers (if a slice is full, a new one is selected) and can
>>>> handle data bigger than Integer.MAX_VALUE (it uses full long
>>>> position pointers).
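The long-pointer scheme described above can be sketched as follows (a hypothetical illustration with an assumed slice size, not the actual directmemory-buffer code): a long position is split into a slice index and an int offset, so total capacity can exceed Integer.MAX_VALUE even though each ByteBuffer is limited to int indices.

```java
import java.nio.ByteBuffer;

// Hypothetical sketch of long addressing over fixed-size slices:
// position -> (slice index, offset within slice).
class SlicedBuffer {
    static final int SLICE_SIZE = 1 << 20; // 1 MiB per slice (assumed)
    private final ByteBuffer[] slices;

    SlicedBuffer(int sliceCount) {
        slices = new ByteBuffer[sliceCount];
        for (int i = 0; i < sliceCount; i++) {
            slices[i] = ByteBuffer.allocate(SLICE_SIZE);
        }
    }

    void put(long position, byte value) {
        // Absolute put: no internal position/limit is changed.
        slices[(int) (position / SLICE_SIZE)]
                .put((int) (position % SLICE_SIZE), value);
    }

    byte get(long position) {
        return slices[(int) (position / SLICE_SIZE)]
                .get((int) (position % SLICE_SIZE));
    }
}
```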
>>>> It is located in the directmemory-buffer submodule since it was its own
>>>> project, and it is fully usable even without
>>>> DirectMemory (I would suggest giving users the chance to use it on
>>>> its own).
>>>> As stated before, it introduces a new dependency, and especially a
>>>> platform-dependent one. At least it is an optional dependency, and CLA
>>>> is deactivated if JNA is not available on the classpath.
>>>> I also added 3 properties to configure the default strategy for
>>>> creating the PartitionBufferPools:
>>>> directmemory.buffer.pooling.disabled: true deactivates pooling and
>>>> uses lazy creation and immediate destruction on release
>>>> directmemory.buffer.unsafe.enabled: true activates the use of
>>>> sun.misc.Unsafe raw memory access (a check whether Unsafe is available
>>>> is applied too)
>>>> directmemory.buffer.offheap.enabled: true enables DirectByteBuffer
>>>> usage for the non-Unsafe pools
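The three property names come straight from the mail; how they are consumed at pool-creation time could look like this (the class and method names here are assumptions, the defaults are the standard `Boolean.getBoolean` behavior of false-when-unset):

```java
// Hypothetical sketch: reading the three configuration properties
// named in the mail via standard Java system properties.
class BufferConfig {
    static boolean poolingDisabled() {
        return Boolean.getBoolean("directmemory.buffer.pooling.disabled");
    }

    static boolean unsafeEnabled() {
        return Boolean.getBoolean("directmemory.buffer.unsafe.enabled");
    }

    static boolean offheapEnabled() {
        return Boolean.getBoolean("directmemory.buffer.offheap.enabled");
    }
}
```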
>>>> I merged it into my local fork of DirectMemory on GitHub [1] but had
>>>> to adjust the API of DirectMemory in some places. I introduced a
>>>> MemoryManagerFactory which handles creation of the different
>>>> MemoryManagers (the old ones, partly renamed:
>>>> UnsafeMemoryManager and AllocatorMemoryManager) and the new
>>>> PartitionBufferMemoryManager.
>>>> The Pointer-API is now able to use PartitionBuffers as well as the
>>>> old way using byte[].
>>>> I'm not finished yet; I'm still working on making all unit tests pass
>>>> again, but I would appreciate some opinions and discussion on the new API changes.
>>>> Cheers
>>>> Chris
>>>> [1]
